<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[/dev/notes]]></title><description><![CDATA[Aaron's Development Blog]]></description><link>https://aaronbonner.io</link><image><url>https://aaronbonner.io/images/logo.png</url><title>/dev/notes</title><link>https://aaronbonner.io</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 10 May 2026 10:28:04 GMT</lastBuildDate><atom:link href="https://aaronbonner.io/rss" rel="self" type="application/rss+xml"/><author><![CDATA[Aaron Bonner]]></author><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to build from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
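At the time of writing, a .php_cs file is just a PHP script that returns a config object. The sketch below is from memory and the class names and API on master may well differ, so treat it as illustrative only and check the project README:

```php
<?php
// Illustrative .php_cs sketch: the class names here (Symfony\CS\Config\Config,
// Symfony\CS\Finder\DefaultFinder) are assumptions based on the 1.x sources
// and may not match current master.
$finder = Symfony\CS\Finder\DefaultFinder::create()
    ->in(__DIR__);

return Symfony\CS\Config\Config::create()
    ->finder($finder);
```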
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux; you just go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group; handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
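If you want to see what the alias will expand to, you can run the moving part yourself. A quick sketch using a throwaway repo (no remote is needed just to inspect the branch name):

```shell
# Reconstruct the command the alias runs, using a scratch repo
repo=$(mktemp -d)
git -C "$repo" init -q
branch=$(git -C "$repo" symbolic-ref --short HEAD)   # current branch name
echo "git branch --set-upstream-to=origin/$branch"
```

Whether that prints origin/master or origin/main depends on your git version's default branch name.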
<p>If you tend to use a remote name other than origin, change the alias accordingly.</p>
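As an aside, if you&#39;re reading this on a much newer git (2.37 or later, to the best of my recollection), there is a config option that makes the whole dance unnecessary; with this in your ~/.gitconfig, a plain git push on a new branch sets the upstream automatically:

```
[push]
    autoSetupRemote = true
```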
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables cannot be passed along the pipeline, as each subprocess gets a brand new environment and any variable assignments are lost when it exits.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
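To make the gotcha concrete, here is a minimal bash sketch showing the variable change vanishing, followed by one of the workarounds from that FAQ (feed the loop with a redirection instead of a pipe, so it stays in the current shell):

```shell
# In bash, each pipeline stage runs in its own subshell, so the
# loop's updates to 'count' are lost when the subshell exits.
count=0
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))
done
echo "after pipeline: count=$count"        # prints 0, not 3

# Workaround: a redirection keeps the loop in the current shell.
count=0
while read -r line; do
  count=$((count + 1))
done <<EOF
a
b
c
EOF
echo "after redirection: count=$count"     # prints 3
```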
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a “fast-forward.”
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible, you can force the merge through with</p>
<pre><code>$ git merge --no-ff
</code></pre>
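If you want to see the setting in action without touching a real project, here is a throwaway-repo sketch (it sets merge.ff locally rather than with --global, so nothing leaks into your config):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com   # throwaway identity for the demo commits
git config user.name demo
git config merge.ff only                 # local equivalent of the --global setting

echo one > file.txt
git add file.txt
git commit -qm 'first'

git checkout -qb feature                 # branch off and add a commit
echo two >> file.txt
git commit -qam 'second'

git checkout -q -                        # back to the original branch
git merge feature                        # directly upstream, so this fast-forwards
```

Add a commit to the original branch before merging and the same git merge refuses to proceed, which is your gentle reminder to rebase.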
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go though, it excels. It is detailed without being turgid, and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion, and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and instead focuses on the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from consideration by MIN, effectively restricting the result to values greater than zero.</p>
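To make this concrete, here is a worked example with a hypothetical prices table (table and column names are illustrative):

```sql
-- Hypothetical data:
--   group_id | price
--   ---------+------
--       1    |  0.00
--       1    | 12.99
--       1    |  9.99
--       2    |  0.00
--       2    |  4.50

SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM prices
GROUP BY group_id;

-- group 1 -> 9.99, group 2 -> 4.50
```

One edge case worth knowing: if every price in a group is zero, NULLIF turns them all to null and MIN returns null for that group, so guard for that in application code.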
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then), you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed ssl certificate / key pair and storing it under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs the executable bit set for all users (so the web server can traverse the directory tree). Literally, this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
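If the reasoning behind that is unclear, here is a small throwaway sketch of the execute bit gating directory traversal (run it as a regular user; root bypasses these permission checks):

```shell
d=$(mktemp -d)
mkdir -p "$d/outer/inner"
echo hello > "$d/outer/inner/file"

chmod a-x "$d/outer"                       # drop traversal permission on the parent
cat "$d/outer/inner/file" 2>/dev/null \
  || echo "denied: cannot traverse outer/" # what a non-root user sees

chmod a+x "$d/outer"                       # restore traversal, like chmod a+x above
cat "$d/outer/inner/file"                  # readable again
```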
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the socket path for the MySQL version you&#39;re using to PHP&#39;s mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
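<p>If you also use the mysqli or legacy mysql extensions, they have their own socket directives (directive names per the PHP manual) that you can append in the same way:</p>
<pre><code>mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock
mysqli.default_socket=/opt/local/var/run/mysql55/mysqld.sock
</code></pre>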
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
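<p>Since the option follows the .ssh/config format, you can also set it once in ~/.ssh/config and forget about it (the &#39;awshost&#39; alias here is made up):</p>
<pre><code>Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
</code></pre>
<p>After that, a plain <code>sshfs awshost:/var/www/ ~/Sites/awshost</code> does the right thing, as sshfs runs ssh under the hood.</p>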
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a URL that I knew would be the name of an image. I knew strstr well, but that operates by giving you the remainder of a string haystack starting from some needle. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similar behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate the primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
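<p>As a sketch of the kind of wrapper I mean (a hypothetical class, not a real library), something like this would at least make the intent readable:</p>
<pre><code>class Str
{
    private $value;

    public function __construct($value) { $this-&gt;value = $value; }

    // afterLast(&#39;/&#39;) is a lot easier to remember than strrchr()
    public function afterLast($needle)
    {
        $pos = strrpos($this-&gt;value, $needle);
        return $pos === false ? &#39;&#39; : substr($this-&gt;value, $pos + strlen($needle));
    }
}

$str = new Str(&#39;http://www.google.com/a/b/c/d.img&#39;);
echo $str-&gt;afterLast(&#39;/&#39;); // prints d.img
</code></pre>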
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who the Executive-X he mentioned in <em>The Early History of Smalltalk</em> was (it was Jerry Elkind). However, I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about - scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (née ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers that shared his vision of interactive computing, and with those - whether in his lab, the other labs, or in management - that, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning, and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang.</p>
<p>The book contains a number of particularly powerful scenes. Two in particular stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly - their feelings, motivations and backgrounds - that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
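<p>Concretely, that looks like this (run from the Magento root; paths as above):</p>
<pre><code>$ mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
$ cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
     app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
# edit the copy under app/code/local and apply the fix there
</code></pre>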
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and to focus particularly on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, Metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk formalising most of the vocabulary for OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column, as its name suggests, columnates input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
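<p>Note that a stock zsh doesn&#39;t behave like this out of the box. Roughly the following in ~/.zshrc enables it - the setopt names are real zsh options, while the aliases are a sketch of what frameworks like oh-my-zsh set up for you:</p>
<pre><code>setopt auto_cd       # type a directory name on its own to cd into it
setopt auto_pushd    # every cd pushes the old directory onto the stack
alias d=&#39;dirs -v&#39;    # list the stack with indices
alias 1=&#39;cd -1&#39;      # jump straight to stack entry 1
alias 2=&#39;cd -2&#39;
</code></pre>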
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local, the default, and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>This works fine. However, if you want to debug during a PHPUnit test, you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (as of version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its own port 9000 back to port 9000 on the SSH client machine. When xdebug on the VM goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
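<p>Putting the two pieces together, a session looks something like this (hostnames are the examples from above; note that xdebug now connects to the VM&#39;s own localhost and the tunnel carries the connection home):</p>

```shell
# 1. From the dev machine, open a reverse tunnel into the VM
ssh -R 9000:localhost:9000 myvm.local

# 2. Then, inside that session on the VM, point xdebug at localhost;
#    the tunnel forwards port 9000 back to the IDE on the dev machine
PHP_IDE_CONFIG='serverName=mydevmachine.local' \
    php -dxdebug.remote_host=localhost myphpscript.php
```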
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full-stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package X requires a stable release of package Y, while package Z requires a beta of package Y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
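<p>Process substitution is worth a moment on its own. A minimal, generic sketch, with diff standing in for xmllint (which may not be installed everywhere):</p>

```shell
#!/usr/bin/env bash
# <(cmd) expands to a /dev/fd path backed by cmd's output, so a tool that
# insists on filename arguments can read from a pipeline with no temp files.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "no differences"
```

<p>Note this is a bash/zsh feature; it is not available in a plain POSIX sh.</p>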
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
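<p>A throwaway repository is an easy way to see exactly what --name-only trims away; everything below (paths, file names, commit message) is purely illustrative:</p>

```shell
#!/usr/bin/env bash
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
echo one > a.txt
echo two > b.txt
git add a.txt b.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "add files"
# Suppress the commit header too, leaving only the affected paths
git show --name-only --format= HEAD
```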
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode.</p>
<p>First, shut down the running instance and then restart it directly:</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password = PASSWORD(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there compared to pretty much any other platform, because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But &#39;Old PHP&#39; is what most people seem to find when searching Google, and this confuses them.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it really need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>Admittedly this isn&#39;t great documentation, but it basically means: if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh it, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
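<p>The whole lifecycle can be rehearsed with a pair of throwaway repositories; all paths and branch names below are made up for illustration:</p>

```shell
#!/usr/bin/env bash
set -e
work=$(mktemp -d) && cd "$work"
git init -q upstream
git -C upstream -c user.name=demo -c user.email=demo@example.com \
    commit -qm "initial" --allow-empty
git -C upstream branch doomed
git clone -q upstream clone          # the clone now tracks origin/doomed
git -C upstream branch -D doomed     # branch deleted on the "remote"
git -C clone branch -r               # stale origin/doomed still listed
git -C clone remote prune origin     # prune it
git -C clone branch -r               # origin/doomed is gone
```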
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table-cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think: I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>This fix is easy, replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe: they fixed one bit but (and programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). To make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the Magento admin menu, you have two simple options: hide it using CSS, or drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
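<p>For reference, the trick looks roughly like this (the menu path and module name below are hypothetical placeholders - point the menu node at the real entry you want to hide):</p>

```xml
<!-- app/code/local/My/Module/etc/adminhtml.xml (hypothetical paths) -->
<config>
    <menu>
        <some_top_level_menu>
            <children>
                <entry_to_hide>
                    <!-- depend on a module that does not exist, so the
                         item fails its dependency check and is hidden -->
                    <depends>
                        <module>My_NonExistentModule</module>
                    </depends>
                </entry_to_hide>
            </children>
        </some_top_level_menu>
    </menu>
</config>
```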
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 
655958024588 (-newer) and not newer than  627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
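<p>Here is a self-contained sketch of the technique (the filenames and dates are made up for the demo; it works the same with the report files above). Note that the boundary files themselves can match ! -newer, hence the extra name filter:</p>

```shell
# Create three files with spread-out mtimes, plus the two boundary files
demo=$(mktemp -d)
cd "$demo"
touch -t 202001010000 old_file
touch -t 202006150000 middle_file
touch -t 202012310000 new_file
touch -t 202003010000 start_date_file
touch -t 202009010000 end_date_file

# Only middle_file falls strictly between the two boundaries
find . -type f -newer start_date_file ! -newer end_date_file ! -name '*_date_file'
# prints: ./middle_file
```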
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the semicolon terminates the command sequence (much like it does in regular bash - it is backslash-escaped so the shell passes it through to find).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2 whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells that chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change your shell without complaint.</p>
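<p>A quick sketch of the idea, run against a scratch copy of /etc/shells so it needs no root (in real life you edit /etc/shells itself via sudo; the Macports bash path /opt/local/bin/bash is assumed):</p>

```shell
# Work on a scratch copy of /etc/shells so this runs unprivileged
shells=$(mktemp)
printf '/bin/bash\n/bin/zsh\n' > "$shells"   # stand-in for /etc/shells
echo '/opt/local/bin/bash' >> "$shells"      # add the Macports bash
grep -qx '/opt/local/bin/bash' "$shells" && echo 'chsh will now accept it'
```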
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols (&amp; in the replacement stands for the whole matched line). Awk or any other concatenation tool would do just fine here too.</p>
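<p>You can see each stage at work without a database by substituting printf for the mysql command (the colour values here are just demo data):</p>

```shell
# printf stands in for the mysql query output: one value per line
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# prints: ['red','green','blue'];
```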
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in this date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It does mean the validator has to be constructed afresh on each iteration, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s. And Gentoo systems tended to break a lot; by break, in the absolute best case, I mean they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf over to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host, so if you need access to some specific hardware you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining if all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key, and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just 2 rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
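<p>The effect is easy to reproduce outside Magento. This awk sketch (demo data, not real product ids) keys the same rows first on the status column and then on the entity id, mirroring what fetchPairs does with whichever column comes first:</p>

```shell
# rows: "status entity_id" - two distinct statuses, three products
rows='1 101
1 102
2 103'
# keyed on status: duplicate keys clobber each other, only 2 entries survive
echo "$rows" | awk '{ a[$1] = $2 } END { n = 0; for (k in a) n++; print n }'
# keyed on entity_id: every product keeps its row, 3 entries
echo "$rows" | awk '{ a[$2] = $1 } END { n = 0; for (k in a) n++; print n }'
```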
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks in a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into the common issues developers face when building OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The idea that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders, techniques for creating test data for use in your test cases, particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read, and its chapters are of a length that can be comfortably read on the train or bus, or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and support what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display in the depth and quality of their references. While GOOS is a fairly domain-specific (Mock Object) text, it serves as a wonderful launching pad for further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly, there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented software design and practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy translate easily.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app, which opens every new window as a login shell), .bash_profile is sourced only for login shells: when you enter your username and password at the console, or when you log in over SSH. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>The confusing case is getting a login shell without actually logging in, such as when you use the su - command or an explicit login-shell option sometimes provided by a desktop environment. The same rule applies: a login shell sources .bash_profile, and .bashrc is only sourced if .bash_profile does so explicitly.</p>
<p>I tend to put environment setup in .bash_profile: paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to simply source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
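<p>That delegation is just a couple of lines. A common pattern for ~/.bash_profile (a sketch; adjust the path if your setup differs):</p>

```shell
# ~/.bash_profile: defer everything to ~/.bashrc so login and
# non-login interactive shells share one configuration
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```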
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
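<p>If the --set-selections round trip feels opaque, the same cleanup can be done more transparently. Removed-but-not-purged packages show up in <em>dpkg -l</em> with the status &#39;rc&#39; (removed, config files remaining), so you can list and purge them directly. A sketch, assuming GNU awk and xargs are available:</p>

```shell
# list packages that were removed but still have config files;
# dpkg -l flags them with "rc" in the status column
dpkg -l | awk '/^rc/ { print $2 }'

# purge each one (-r makes xargs a no-op when the list is empty)
dpkg -l | awk '/^rc/ { print $2 }' | xargs -r sudo dpkg --purge
```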
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Modifyvm may only be used while the VM is powered off; for a running VM, use controlvm instead.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
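<p>For a VM that is already running, the equivalent rules can be added and removed with controlvm (a sketch; double-check the syntax against the VBoxManage manual for your VirtualBox version):</p>

```shell
# add an SSH port-forwarding rule to a running VM
VBoxManage controlvm "Name of VM" natpf1 "guestssh,tcp,,2222,,22"

# and remove it again
VBoxManage controlvm "Name of VM" natpf1 delete "guestssh"
```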
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libXss, not being there. But it <em>is</em> there; specifically, though, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 shared libraries. But when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing both a compatible libXss and several Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in programs based on how they are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
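<p>To put the two targets side by side, you can write each assembly listing to its own file and diff them (a sketch; the file names are arbitrary, and -m32 assumes the 32bit libc headers above are installed):</p>

```shell
# emit Intel-syntax assembly for each target architecture
gcc -m32 -S -masm=intel -o test32.s test.c
gcc -m64 -S -masm=intel -o test64.s test.c

# compare, e.g. stack-based argument passing on x86 vs the
# register calling convention on x86_64
diff -u test32.s test64.s | less
```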
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into, and then ripped out of, the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about an image: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch into a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
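<p>The whole sequence can be rehearsed end-to-end against a throwaway local &#39;remote&#39; (the scratch paths and identity below are illustrative fixtures; only the final push is the technique itself):</p>

```shell
# Sketch: push-nothing deletion against a local bare repository.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email you@example.com
git config user.name 'A N Other'
git commit -q --allow-empty -m 'initial commit'
git remote add origin "$tmp/remote.git"
# publish two branches on the remote
git push -q origin HEAD:refs/heads/master HEAD:refs/heads/develop
# push "nothing" into develop, deleting it on the remote
git push -q origin :develop
git ls-remote --heads origin
```

After the final push, ls-remote lists only the master branch.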
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or, in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
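<p>The difference is easy to demonstrate with a single byte: 0x80 is the euro sign in cp1252 but a little-used C1 control code in iso-8859-1, so decoding cp1252 data as if it were iso-8859-1 silently produces the wrong character:</p>

```shell
# Byte 0x80 (octal \200) is the euro sign under cp1252...
printf '\200' | iconv -f cp1252 -t utf-8
# ...but maps to the invisible control character U+0080 under iso-8859-1
printf '\200' | iconv -f iso-8859-1 -t utf-8 | od -An -tx1
```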
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
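<p>To see why the soaking matters, the failure mode can be reproduced with nothing but the shell: redirection truncates the output file <em>before</em> the command reading it ever runs (the scratch paths below are illustrative):</p>

```shell
# The shell opens demo.txt for writing (truncating it) before sed
# starts reading it, so a naive in-place filter destroys the file.
set -e
tmp=$(mktemp -d)
printf 'hello\n' > "$tmp/demo.txt"
sed 's/hello/HELLO/' "$tmp/demo.txt" > "$tmp/demo.txt"
wc -c < "$tmp/demo.txt"    # the file is now empty
```

Piping through sponge instead of redirecting avoids exactly this, because the file is only opened for writing after EOF.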
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically, we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
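<p>The same plumbing can be exercised offline (bash is required for process substitution; the tarball below is a scratch fixture standing in for the downloaded file):</p>

```shell
# Build a scratch tarball, then stream it into tar through a process
# substitution instead of an intermediate file, mirroring the wget trick.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo 'hello' > "$tmp/src/file.txt"
tar -C "$tmp" -czf "$tmp/atarfile.tar.gz" src
cd "$tmp/dst"
tar zxf - < <(cat "$tmp/atarfile.tar.gz")
cat src/file.txt
```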
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can take the output of the module list command using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
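<p>The chaining pattern itself needs nothing Drupal-specific: any command whose output is a whitespace-separated list can feed another command&#39;s argument list (the .module files below are empty stand-ins):</p>

```shell
# `ls *.module` plays the role of `drush pm-list --pipe` here: its
# output is split by the shell into arguments for the outer command.
set -e
tmp=$(mktemp -d)
cd "$tmp"
touch ad.module views.module README.txt
rm `ls *.module`
ls
```

Only README.txt survives, just as only core modules survive the drush one-liner.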
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
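<p>A side note for newer Git versions: &#39;git branch --set-upstream&#39; has since been replaced by &#39;git branch --set-upstream-to&#39;, and &#39;git push -u&#39; performs the push and upstream steps in one go. A self-contained sketch against a throwaway local remote (paths and identity below are illustrative):</p>

```shell
# Push a new branch and set its upstream in a single command with -u.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email you@example.com
git config user.name 'A N Other'
git commit -q --allow-empty -m 'initial commit'
git remote add origin "$tmp/origin.git"
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature
# confirm the tracking relationship
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'
```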
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G are <em>appended</em> to the existing list of groups. Otherwise the user&#39;s existing supplementary groups will be replaced with the groups supplied.</p>
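<p>To verify the result, the id command lists a user&#39;s group memberships (shown here for the current user; pass a username, e.g. &#39;id -nG aaron&#39;, to inspect another account):</p>

```shell
# Print the supplementary groups of the current user. Note that an
# already-logged-in user must start a new session before a change
# made with usermod becomes visible here.
id -nG
```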
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
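<p>The round trip can be rehearsed in a scratch repository ($base stands in for master below, since the default branch name varies between Git versions):</p>

```shell
# Stash a change made on the wrong branch, then replay it on develop.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name 'A N Other'
echo 'one' > file.txt
git add file.txt
git commit -q -m 'initial commit'
base=$(git symbolic-ref --short HEAD)
git branch -q develop
echo 'two' >> file.txt     # oops: edited while still on $base
git stash push -q          # shelve the change, leaving $base clean
git checkout -q develop
git stash pop -q           # replay the change on develop
tail -n 1 file.txt
```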
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
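<p>In newer Git versions this last step is spelled &#39;git branch --set-upstream-to origin/develop&#39; (or &#39;git branch -u origin/develop&#39;). A self-contained sketch with a throwaway upstream repository (paths and identity are illustrative):</p>

```shell
# Check out a local develop branch tracking the remote develop branch.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
cd "$tmp/upstream"
git config user.email you@example.com
git config user.name 'A N Other'
git commit -q --allow-empty -m 'initial commit'
git branch -q develop
git clone -q "$tmp/upstream" "$tmp/clone"
cd "$tmp/clone"
git checkout -q -b develop origin/develop
# confirm the tracking relationship
git rev-parse --abbrev-ref 'develop@{upstream}'
```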
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
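<p>The same identity can also be written from the shell with git config; the --file flag keeps this sketch self-contained (as the jenkins user you would normally use --global, which writes to ~/.gitconfig):</p>

```shell
# Write the identity into a scratch config file and read it back.
set -e
cfg=$(mktemp)
git config --file "$cfg" user.name 'Jenkins'
git config --file "$cfg" user.email 'jenkins@localhost'
cat "$cfg"
```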
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images; to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository; for example, mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
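<p>For scripted maintenance you can wrap both styles in small shell functions. The following is only a sketch: the Jenkins URL is a placeholder for your own server, and it assumes jenkins-cli.jar has been downloaded to the current directory as above.</p>

```shell
# Sketch only: helpers wrapping the two styles of Jenkins administration
# shown above. JENKINS_URL and the job name passed to jenkins_build are
# placeholders; jenkins-cli.jar is assumed to be in the current directory.
JENKINS_URL="${JENKINS_URL:-http://localhost:8080}"

jenkins_build() {
    # Trigger a build of the named job via the cli jar
    java -jar jenkins-cli.jar -s "$JENKINS_URL" build "$1"
}

jenkins_reload() {
    # Reload the server configuration with a plain http request
    curl "$JENKINS_URL/reload"
}
```

<p>Usage: <code>jenkins_build Bookings</code> to trigger a build of a job named &#39;Bookings&#39;, <code>jenkins_reload</code> after editing configuration by hand.</p>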
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
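<p>The report boils down to a query against information_schema.TABLES. As a rough sketch of the idea (not the gist itself; the column arithmetic and rounding here are my own choices, and you&#39;ll need to pass connection options for your setup), wrapped as a shell function:</p>

```shell
# Sketch: list a database's tables by combined data + index size,
# largest first, using INFORMATION_SCHEMA. The exact columns and
# rounding are my own choice, not necessarily the gist's. Pass
# connection options (-u, -p, -h) to mysql as your setup requires.
tablesize_report() {
    local db="$1"
    mysql -e "SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
  FROM information_schema.TABLES
 WHERE table_schema = '${db}'
 ORDER BY (data_length + index_length) DESC"
}
```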
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of <em>git pull</em> will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig</p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote</p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least amount of work.</p>
<p>You can also avoid having to do this at all by setting:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and, with a few exceptions, it has performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository:</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a pattern, putting them in an array, and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
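<p>In outline, such a function runs along these lines. This is a sketch of the approach rather than the gist itself: connection options are omitted, and note that SHOW TABLES LIKE takes SQL wildcards such as %, not shell globs:</p>

```shell
# Sketch of the approach described above: ask mysql for the table names
# matching a LIKE pattern, collect them, then iterate over the list with
# successive calls to mysqldump. Connection options (-u, -p) omitted;
# the pattern uses SQL wildcards ('%'), not shell globs ('*').
mysqldump_bypattern() {
    local db="$1" pattern="$2" tables t
    # -N suppresses the column header, leaving bare table names
    tables=$(mysql -N -e "SHOW TABLES LIKE '${pattern}'" "$db")
    if [ -z "$tables" ]; then
        echo "no tables in ${db} match ${pattern}" >&2
        return 1
    fi
    for t in $tables; do
        mysqldump "$db" "$t"
    done
}
```

<p>For example: <code>mysqldump_bypattern mydb &#39;mytables_%&#39; &gt; mytables.sql</code>.</p>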
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until they land there, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys as $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys as $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install it.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of one is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using string objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one: it has few methods (the main ones return its string and integer representations), it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages), two strings consisting of the same sequence of characters are still different objects; two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
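<p>A short sketch of my own (not from any of the books, safe to paste into irb) makes the difference concrete:</p>

```ruby
# Two string literals with the same characters are two distinct objects,
# each paying for its own allocation.
a = "name"
b = "name"
puts a.object_id == b.object_id  # false

# Every occurrence of a symbol with the same characters is the same object.
x = :name
y = :name
puts x.object_id == y.object_id  # true

# Which is why symbols are the idiomatic, cheaper choice for hash keys.
person = { :name => "Aaron", :language => "Ruby" }
puts person[:name]               # Aaron
```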
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
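<p>To see how loose Ruby constants really are next to Symbols, consider this small illustration of mine (GREETING is a made-up name):</p>

```ruby
GREETING = "hello"
GREETING = "goodbye"   # Ruby merely warns: already initialized constant
puts GREETING          # goodbye -- the so-called constant happily changed

# A symbol, by contrast, cannot be assigned to at all; ':foo = 1' does not
# even parse, so we have to provoke the failure through eval.
begin
  eval(":foo = 1")
rescue SyntaxError
  puts "symbols cannot be assigned to"
end
```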
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code places an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant, natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say that as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. The two best resources I&#39;ve found for that are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases it is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character for plain ASCII output or up to 4 bytes per character for UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: MyISAM locks whole tables by default, so for the duration of the slower, compressed dump you’re locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of compression format is important. You have to trade off decompression speed against file size: the extra CPU time consumed decompressing a bzip2 dump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a compressed dump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump starts a transaction before dumping the contents of a table, ensuring a consistent view of the data without blocking other applications: writes can occur while the backup is taking place without affecting the backup (this applies to transactional tables such as InnoDB). The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the consistency of those tables can be lost as writes occur to them during the backup process; that risk has to be weighed against the cost of blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement on an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example:</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository (with a working tree; see below for bare repositories) in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that has only the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar model of Subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally it's shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But getting a test environment set up and representative of the live system is often a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
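<p>Ruby has not thrown the program name away, it simply keeps it out of ARGV: the global $0 (alias $PROGRAM_NAME) holds it instead. A quick sketch of mine (argv_demo.rb is a hypothetical filename):</p>

```ruby
#!/usr/bin/env ruby
# argv_demo.rb (hypothetical filename)
puts $0           # the program name - what C or PHP would put in argv[0]
puts ARGV[0]      # the first real argument
puts ARGV.length  # counts arguments only; the program name is excluded
```

<p>Running <code>ruby argv_demo.rb helloworld</code> prints argv_demo.rb, then helloworld, then 1.</p>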
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<pre><code>aaron ~/Development/ruby/testapp $ rake db:create
(in /Users/aaron/Development/ruby/testapp)
Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci
(if you set the charset manually, make sure you have a matching collation)</code></pre>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<pre><code>aaron ~/Development/ruby/testapp $ rake db:create
(in /Users/aaron/Development/ruby/testapp)</code></pre> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature; you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow; you're loading the full index in an attempt to find a unique document with an ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This comes down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
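<p>Putting the pieces together, a full 'update' then becomes delete-plus-re-add. Here's a sketch of my own (untested against a live index, and $newTitle is a made-up variable for illustration), using only methods the component provides: termDocs(), delete(), addDocument() and commit():</p>

```php
// Locate the existing document by its untokenized Keyword field
$term = new Zend_Search_Lucene_Index_Term('http://a.com/uri', 'uri');
foreach ($index->termDocs($term) as $id) {
    $index->delete($id);          // no in-situ update: remove the old copy first
}

// Then add the replacement document
$doc = new Zend_Search_Lucene_Document();
$doc->addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));
$doc->addField(Zend_Search_Lucene_Field::Text('title', $newTitle));
$index->addDocument($doc);
$index->commit();                 // flush the delete and the add to the index
```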
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it goes unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional 'filter' parameter. The online documentation makes scant mention of what values this $filter parameter can take. Luckily, in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, comes from working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
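<p>One wrinkle worth knowing (my own note, based on how the filter behaves in my reading of it): the flags are bitmasks that combine as a union, so passing IS_PUBLIC | IS_FINAL returns methods that are public <em>or</em> final, not only methods that are both. A small illustrative sketch, with a made-up class name:</p>

```php
class Demo
{
    public function a() {}            // matches IS_PUBLIC
    protected final function b() {}   // matches IS_FINAL
    private function c() {}           // matches neither flag
}

$r = new ReflectionClass('Demo');
// Union semantics: this lists a() and b(), but not c()
foreach ($r->getMethods(ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_FINAL) as $m) {
    echo $m->getName(), "\n";
}
```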
<p>As with many platforms, in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through the back pages of the GOF book, or in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, though, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
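<p>One caveat worth flagging (my own addition, not part of the original steps): a blanket <code>s/latin1/utf8/g</code> also rewrites any post content that happens to contain the string 'latin1'. A more targeted substitution only touches the schema's charset declarations, as this throwaway demonstration shows:</p>

```shell
# Demo with a temp file standing in for forum_db_backup.sql
dump=$(mktemp)
printf '%s\n' \
  'CREATE TABLE post (body text) ENGINE=MyISAM DEFAULT CHARSET=latin1;' \
  "INSERT INTO post VALUES ('a thread discussing latin1 encodings');" > "$dump"

# Only rewrite the DDL charset declaration, leaving user data alone
sed -i 's/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8/g' "$dump"

cat "$dump"   # the CREATE TABLE now says utf8; the INSERT still says latin1
rm -f "$dump"
```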
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any language packs you use also need converting to UTF-8 before they are imported.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and on to the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and restrained in its use of fonts. Giant mastheads, fancy bullets and a mess of fonts aren't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables declared with a capitalised first letter. Oh sorry, that should be capitalized; Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way: at most it prints a warning and carries on.</span></p>&#13;
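<p>To make that concrete, here's a minimal sketch of my own (the constant name is made up for the example) showing just how little Ruby resists:</p>

```ruby
# Reassigning a "constant" in Ruby succeeds; you only get a warning on stderr.
SPEED_LIMIT = 70
SPEED_LIMIT = 90   # warning: already initialized constant SPEED_LIMIT
puts SPEED_LIMIT   # prints 90 - the "constant" happily changed
```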
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys who haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience of and understanding of how these low level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple TCP client in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit though of learning a bit of C or C++, is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in. But similarly, a sometimes useful characteristic that makes the environment still relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: char str[20];</p>&#13;
<p>This reserves 20 bytes of memory for str: room for 19 characters plus the terminating NUL. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. These were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run fast. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
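<p>For example, building a CSV entirely in memory (a quick sketch of my own, with made-up field values) looks just like working with a real file:</p>

```php
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'widget'));

rewind($fh);                        // seek back to the start before reading
$csv = stream_get_contents($fh);    // the CSV text, never written to disk
fclose($fh);

echo $csv;
```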
<p>You can fread, fwrite and stream_get_contents on the memory stream, or push it out over the network using the TCP streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, i.e. just have '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request Worldpay sent to your callback URL (including the encoded post data), and the response from your server.</p>&#13;
<p>Now, assuming you can work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals &lt;directory&gt;</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[development tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
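<p>For illustration, a helper rewrite in a module's config.xml looks something like this (the module and class names below are invented for the example):</p>

```xml
<!-- app/code/local/Mycompany/Catalog/etc/config.xml (hypothetical module) -->
<config>
    <global>
        <helpers>
            <catalog>
                <rewrite>
                    <!-- the context element ("data") must match the helper alias's case exactly -->
                    <data>Mycompany_Catalog_Helper_Data</data>
                </rewrite>
            </catalog>
        </helpers>
    </global>
</config>
```

<p>The same rewrite structure under &lt;blocks&gt; handles block overrides, where the matching is forgiving about case.</p>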
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
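<p>A rough sketch of that translation (simplified; the real logic lives in Mage_Core_Model_Config and Varien_Autoload, and the helper functions below are mine, not Magento's):</p>

```php
<?php
// Simplified sketch (not Magento's actual code) of how a model alias like
// 'mymodule/a_long_name_for_a_model' becomes a class name and file path.
function aliasToClass(string $alias): string
{
    // uppercase the first letter of every underscore-separated segment:
    // 'a_long_name_for_a_model' -> 'A_Long_Name_For_A_Model'
    return implode('_', array_map('ucfirst', explode('_', $alias)));
}

function classToPath(string $class): string
{
    // every underscore in the class name becomes a directory separator
    return str_replace('_', '/', $class) . '.php';
}

$class = 'Mymodule_Model_' . aliasToClass('a_long_name_for_a_model');
echo classToPath($class);
// Mymodule/Model/A/Long/Name/For/A/Model.php
```

<p>The underscores, not the camelcasing, drive the directory layout, which is why the two spellings resolve to entirely different files.</p>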
<p>On Windows this is fine; on a case-sensitive file system (e.g. Linux/Unix, or case-sensitive HFS+ on a Mac), it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old and will be selected.</p>&#13;
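<p>You can sanity-check the interval arithmetic in plain PHP (just an illustration of the same subtraction, not a substitute for doing it in SQL):</p>

```php
<?php
// Subtract a 30-day interval from a fixed date, mirroring
// DATE_SUB('2010-05-20', INTERVAL 30 DAY) in MySQL.
$cutoff = new DateTime('2010-05-20');
$cutoff->sub(new DateInterval('P30D'));

echo $cutoff->format('Y-m-d'); // 2010-04-20
```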
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by subbing the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider  three aspects</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>Type="core/template' refers to a Magento class  Mage_Core_Block_Template. This type can be different if you want a  specific type of block. But core/template is the simplest way to include  some text on a page. The 'name' and 'as' attributes allow you to  reference the block in your enclosing templates, e.g. 3column.phtml, or  other layout .xml files. The template attribute is the relative path to  the text template you want to include and is relative to the theme  template root i.e.  app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included (note the echo: getChildHtml returns the rendered block rather than printing it). Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In that case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages under the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages under the url customer/account/edit/). Including your remove declaration in the right element here allows fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore within the &lt;customer_account&gt; element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour in memcached 1.4, where the second parameter to the delete command was removed. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. Thankfully 1.4.4 added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
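<p>As a quick sketch, the whole check-and-set routine looks like this (de_DE.UTF-8 is just a placeholder; pick whichever utf-8 locale matches your language from the list):</p>&#13;

```shell
# List the utf-8 capable locales available on this machine
locale -a | grep -i 'utf' || true

# Add one of them to /etc/profile, ~/.profile or ~/.bash_profile
export LC_ALL='de_DE.UTF-8'

# Confirm the environment picked it up
echo "$LC_ALL"
```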
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
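<p>Putting the two steps together, here is a minimal standalone sketch (the sample string and variable names are invented for illustration):</p>

```php
<?php
// "\xA9" is the copyright sign in iso-8859-1 / cp-1252
$isoText = "\xA9 2010 Example GmbH";

// Normalise to utf-8 before storing or processing the text
$utf8Text = iconv('ISO-8859-1', 'UTF-8', $isoText);

// Encode for output, telling PHP the string is utf-8
echo htmlentities($utf8Text, ENT_QUOTES, 'UTF-8');
```

<p>Run from the command line this prints &copy; 2010 Example GmbH; leave off the third parameter and the copyright sign comes out mangled instead.</p>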
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>On Mac OSX it doesn't, because the BSD sed that ships with OSX requires an argument to -i. You'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt, passing an empty string as the backup file extension.</p>&#13;
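<p>If your scripts need to run on linux as well, note that GNU sed rejects the empty-extension form. Attaching an explicit backup suffix to -i (with no space) is one way to keep a one-liner portable across both seds, at the cost of leaving a backup file behind:</p>&#13;

```shell
cd "$(mktemp -d)"
printf 'hello world\n' > helloworld.txt

# -i.bak (suffix attached, no space) works with both GNU and BSD sed;
# it edits in place and keeps the original as helloworld.txt.bak
sed -i.bak 's/hello/goodbye/g' helloworld.txt

cat helloworld.txt
```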
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento. Below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br />http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and in Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p>&#13;
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>${#FILES[@]} expands to the number of elements in FILES, so $(seq 0 $((${#FILES[@]} - 1))) produces the list of valid array indices. The seq command prints a sequence of numbers from x to y; if you call seq 0 4, you will get the numbers 0, 1, 2, 3 and 4, one per line.</p>&#13;
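<p>For what it's worth, when you don't need the numeric index, bash can also iterate the values directly, which avoids the seq arithmetic entirely:</p>&#13;

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )

# "${FILES[@]}" expands to one word per element, so quoting keeps
# paths containing spaces intact
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```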
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details, such as baseurls, or to use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// takes the plaintext to encrypt as its first cli argument&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
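<p>If you prefer not to edit your gitconfig by hand, the same alias can also be created straight from the terminal (this is just the alias above wrapped in a git config call; swap origin for your remote of choice):</p>
<pre><code>$ git config --global alias.sup &#39;!git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`&#39;
</code></pre>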
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables set in one stage of a pipeline cannot be read further along it, or by the parent shell, because each subprocess gets its own copy of the environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
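<p>You can see this in action with a tiny bash example; the while loop runs in a subshell, so its changes to the variable vanish when the pipeline ends:</p>
<pre><code>count=0
printf &#39;a\nb\n&#39; | while read -r line; do
    count=$((count + 1))    # increments a copy local to the subshell
done
echo &quot;count is $count&quot;      # in bash this prints &#39;count is 0&#39;
</code></pre>
<p>(zsh runs the last stage of a pipeline in the current shell, so it would print 2 there.)</p>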
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exactly changed.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a &#39;fast-forward&#39;.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
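<p>A quick way to see the setting in action (a throwaway sketch; the repo and branch names are just examples) is to create two diverged branches and try to merge them:</p>
<pre><code>$ git init demo &amp;&amp; cd demo
$ git commit --allow-empty -m &#39;base&#39;
$ git checkout -b feature
$ git commit --allow-empty -m &#39;feature work&#39;
$ git checkout -          # back to the original branch
$ git commit --allow-empty -m &#39;diverging work&#39;
$ git config merge.ff only
$ git merge feature       # refused, because a fast-forward is impossible
fatal: Not possible to fast-forward, aborting.
</code></pre>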
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a style mainly interested in setting up some state, running a behaviour, and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns NULL when tableref.column equals 0. MIN ignores NULL values, so zeros are simply excluded when the minimum is computed. (Note that if every value in a group is zero, MIN returns NULL for that group.)</p>
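<p>If you want to convince yourself before touching real data, here&#39;s a throwaway run using the sqlite3 command line client (NULLIF and MIN behave the same way there as in MySQL), with a group whose prices are all zero thrown in:</p>
<pre><code>$ sqlite3 :memory: &quot;
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0), (1, 9.99), (1, 4.5), (2, 0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;&quot;
1|4.5
2|
</code></pre>
<p>Group 1 skips the zero and reports 4.5; group 2 only has zeros, so its min_price comes back NULL (the empty column above).</p>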
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one place OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
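<p>openssl will prompt for the certificate subject fields interactively; if you&#39;d rather script it, the subject can be passed inline with -subj. A minimal sketch, assuming the same magento.dev hostname:</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj &quot;/CN=magento.dev&quot; \
    -keyout magento.dev.key -out magento.dev.crt
</code></pre>
<p>Then move the resulting pair into /opt/local/etc/nginx/ssl as before.</p>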
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary who oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However, I ended up coming away with a lot more. In particular, I gained a new appreciation for a number of scientists I previously knew very little about: scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens by transporting the reader back in time to the late 60s and laying out the genesis of PARC. It then proceeds in roughly chronological order, with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did; however, Hiltzik couches everything in terms of the scientists, and it is his ability to bring these characters to life that makes the book so riveting to read.</p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his lab, the other labs, or management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. Inside it, he was the oil that kept the cogs turning, and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang.</p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay falling into a depression after his vision for a Personal Computer was brusquely put down by CSL Manager Jerry Elkind. Alan Kay is well known for his brilliance and verbal flourish, but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to work in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC in particular, on their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t live with broken windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ... later, $unitPrice is interpolated into the item XML:
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. Back in November, though, I did resolve to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk&#39;s development formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking did not survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero, so at the end of a thirty minute work sprint I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection of tricks I&#39;ve picked up recently that I didn&#39;t know before.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column command columnates its input; the -t argument will format standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will then switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local, the default, and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 back to the ssh client&#39;s port 9000. So when xdebug connects to localhost:9000 on the VM, it actually ends up connecting to mydevmachine.local:9000.</p>
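<p>With the tunnel in place, the VM&#39;s xdebug settings just need to point at localhost. Something like this in php.ini (assuming the Xdebug 2.x setting names that were current at the time):</p>
<pre><code>; with the ssh tunnel up, localhost:9000 on the VM is forwarded home
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9000
</code></pre>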
<p>It&#39;s a bit of a hack, but a time saving one. The other option is Vim and its xdebug plugin, which isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months. PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="https://github.com">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example, say package x requires the stable release of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
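<p>As a minimal sketch of what this looks like in practice (the package name here is illustrative, standing in for any Packagist-hosted library), a project declares its dependencies in a composer.json file:</p>

```shell
# Hypothetical project: monolog/monolog stands in for whatever
# Packagist-hosted library you actually depend on
cat > composer.json <<'EOF'
{
    "require": {
        "monolog/monolog": "1.*"
    }
}
EOF
```

<p>Running <code>php composer.phar install</code> against that file resolves the dependency, downloads it into ./vendor, and generates a vendor/autoload.php you can require once to autoload every installed library.</p>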
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so rather than creating temporary files we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to feed it magerun&#39;s output.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care what differs in the specific contents of two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
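<p>The same flag works with git log, if you also want to see which files each individual incoming commit touches:</p>

```shell
# Per-commit file lists for commits on origin/master that the
# local master branch doesn't have yet
git log --name-only master..origin/master
```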
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password = PASSWORD(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh it, I need to prune my branches list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
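<p>As an aside, git can also combine the two steps, fetching and pruning in a single command (-p is the short form of --prune):</p>

```shell
# Fetch new objects from origin and drop any remote-tracking
# branches that no longer exist on the remote, in one go
git fetch --prune origin
```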
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at the line below: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of how bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one instance but (programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
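<p>For reference, the override itself is just a copy into the local code pool, which Magento&#39;s autoloader checks before core. A sketch, assuming you run it from the Magento root and then apply the one-line fix to the copy:</p>

```shell
# Copy the core class into the local code pool; the local copy now
# shadows the core one and is where the fix should be made
mkdir -p app/code/local/Mage/CatalogSearch/Block
cp app/code/core/Mage/CatalogSearch/Block/Result.php \
   app/code/local/Mage/CatalogSearch/Block/Result.php
```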
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
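<p>Putting the two steps together, here&#39;s a throwaway worked example (the file names and dates are illustrative):</p>

```shell
# Work in a scratch directory
cd "$(mktemp -d)"

# Three files with known modification times (touch -t YYYYMMDDhhmm)
touch -t 201206150000 june.log
touch -t 201207150000 july.log
touch -t 201208150000 august.log

# Boundary marker files for the range 1 Jul 2012 to 1 Aug 2012
touch -t 201207010000 start_date_file
touch -t 201208010000 end_date_file

# Only july.log is newer than the start marker and not newer
# than the end marker
find . -name '*.log' -type f -newer start_date_file ! -newer end_date_file
```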
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting: {} is a placeholder for the file name (find substitutes each found filename for {}) and \; terminates the command sequence, much as a semicolon does in regular bash (the backslash stops the shell from interpreting it before find sees it).</p>
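<p>As a footnote, GNU find also has a built-in -delete action, which saves spawning an rm process for every match (this assumes the same boundary files as above):</p>

```shell
# GNU find's -delete removes each matching file itself, no rm needed
find . -type f -newer start_date_file ! -newer end_date_file -delete
```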
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to setup a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent releases templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, as I did from my old 13&quot; Macbook Pro, and it has an equal or higher rated wattage, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher rated power supply can support a lower rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find and friends, it makes sense that you will want to change your shell to a modern version of bash (or zsh, if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I&#39;d done this before, but had forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
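<p>Putting it together, a sketch of the edit, demonstrated here against a scratch copy of the file (on a real system the file is /etc/shells, the Macports bash conventionally lives at /opt/local/bin/bash, and the append needs sudo):</p>

```shell
# Stand-in for /etc/shells so the steps can be shown without root.
shells_file=$(mktemp)
printf '/bin/bash\n/bin/sh\n' > "$shells_file"

echo '/opt/local/bin/bash' >> "$shells_file"   # add the Macports bash
grep -x '/opt/local/bin/bash' "$shells_file"   # chsh will now accept it

# For real: sudo sh -c 'echo /opt/local/bin/bash >> /etc/shells'
#           chsh -s /opt/local/bin/bash
```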
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; - | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because a literal single quote is awkward to escape inside a single-quoted awk program, you pass it in via the q variable instead. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally, sed wraps the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
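<p>To see the formatting stages without a database handy, you can feed the same awk/paste/sed pipeline some stand-in rows (the colour values here are made up):</p>

```shell
# Stand-in for the `mysql --silent` output: one value per line.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# prints: ['red','green','blue'];
```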
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>To query a date range in Solr you use the syntax:</p>
<pre><code>field_name:[Start TO Finish]
</code></pre>
<p>You can also use wildcards and specific constants in a logical way, e.g.:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not NULL. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
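<p>For example, with a hypothetical field named published_date and a concrete range, the full filter would read:</p>

```
-(-published_date:[2012-01-01T00:00:00Z TO NOW] AND published_date:[* TO *])
```

This matches documents whose published_date falls between the start of 2012 and now, plus documents that have no published_date at all.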
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each iteration. It does mean a new validator has to be constructed each time around the loop, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the core GNU utilities...and irssi. Enough to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean, in the absolute best case, the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At that point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host, so if you need to access some specific hardware you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically its getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (the reverse of the if branch). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend Db&#39;s fetchPairs() returns an associative array where the first column is the key and the second is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to be the first column in the result set, so it gets used as the key.</p>
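<p>The collapse is easy to reproduce outside Magento by simulating fetchPairs with an awk associative array (the ids below are made up): keyed on a two-valued status column, three products leave only two entries.</p>

```shell
# Rows are "status product_id", mimicking the status-first SQL output.
# Keying on column 1 (as fetchPairs keys on the first column) means
# rows with a duplicate status overwrite each other and products are lost.
printf '1 101\n1 102\n2 103\n' \
  | awk '{ if (!($1 in pairs)) n++; pairs[$1] = $2 } END { print n }'
# prints: 2  (three products collapse to two status keys)
```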
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different: tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read, and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise software.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented software design and practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to .twig files, edit your ~/.vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app, which opens login shells), .bash_profile is sourced only by login shells; typically that means when you enter your username and password at the console or log in over SSH. The .bashrc file is sourced by interactive non-login shells, that is, whenever you open up a new terminal.</p>
<p>There is some room for confusion when you start a login shell from within an existing session, for example with the <em>su -</em> command or an explicit login-shell option provided by some desktop terminals. The same rule applies: a login shell sources .bash_profile, and bash will not read .bashrc unless your .bash_profile sources it explicitly.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to simply source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
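<p>A minimal ~/.bash_profile following that second approach might look like this (a sketch only; adjust the path to taste):</p>

```shell
# ~/.bash_profile -- delegate everything to ~/.bashrc so that login
# and non-login interactive shells end up with the same environment
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```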
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on a would-be technologist.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
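<p>If you first want to see which packages are in the &#39;removed but not purged&#39; state, their status column in <em>dpkg -l</em> output starts with <em>rc</em>. A small sketch of that filter (the sample input below simulates <em>dpkg -l</em> output so it runs anywhere; on a real system you would pipe <em>dpkg -l</em> itself into the awk filter):</p>

```shell
# Simulated 'dpkg -l' output: 'ii' = installed,
# 'rc' = removed but configuration files remain.
printf 'ii  vim     2:8.2  amd64  text editor\nrc  oldpkg  1.0    amd64  removed package\n' |
    awk '/^rc/ { print $2 }'
# prints: oldpkg
```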
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> may only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
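<p>The natpf rule string is easy to mistype, so here is a small sketch that assembles it; the <em>make_natpf_rule</em> helper is purely illustrative and not part of VirtualBox:</p>

```shell
# Hypothetical helper that builds the --natpf1 rule string VBoxManage expects:
# <label>,<proto>,<host-ip>,<host-port>,<guest-ip>,<guest-port>
# (empty IP fields mean 'listen on all interfaces').
make_natpf_rule() {
    printf '%s,tcp,,%s,,%s' "$1" "$2" "$3"
}

make_natpf_rule guestssh 2222 22   # prints: guestssh,tcp,,2222,,22
```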
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 (32-bit) shared libraries. But when you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing compatible 32-bit versions of both libXss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately, upgrading my test suites to be 3.6 compatible, is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t doing it this way already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and configuring a sane desktop environment with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about an image: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 down to 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
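<p>As an aside, the sed call in the loop above only rewrites the file name; bash parameter expansion can do the same without spawning a subshell:</p>

```shell
# ${IMAGE%.jpg} strips the .jpg suffix, so the resized name can be
# built without piping through sed.
IMAGE=photo.jpg
RESIZED="${IMAGE%.jpg}-resized.jpg"
echo "$RESIZED"   # prints: photo-resized.jpg
```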
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to exactly the requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin&#39;. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch called someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
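<p>The whole round trip can be sketched against a throwaway local remote (all the paths here are temporary and purely illustrative):</p>

```shell
# Create a bare 'remote', push a branch to it, then delete that
# branch by pushing 'nothing' into it.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m 'initial commit'
git checkout -q -b scratch
git push -q origin scratch             # creates origin/scratch
git push -q origin :scratch            # pushes 'nothing' -> deletes origin/scratch
git ls-remote --heads origin scratch   # prints nothing: the branch is gone
```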
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how sponge waits until end-of-file (EOF) before opening and writing to the output file. That is, it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin-1 (cp1252) to UTF-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
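<p>For example, file&#39;s -i flag (on Linux; -I on macOS) reports the MIME type and character set, which makes a handy before/after check. A quick sketch using a fabricated latin-1 byte:</p>

```shell
# Write a single cp1252/latin-1 encoded character and inspect it.
printf 'caf\xe9\n' > legacy.txt
file -i legacy.txt      # reports a non-UTF-8 charset, e.g. iso-8859-1
iconv -f cp1252 -t utf-8 legacy.txt > converted.txt
file -i converted.txt   # now reports charset=utf-8
```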
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy and then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, whose output is piped into sponge. Sponge soaks up the standard input until EOF, then writes it back to the original file.</p>
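<p>If you&#39;re wondering why sponge is needed at all, remember that the shell opens (and truncates) a redirection target before the command on the left even runs. A quick demonstration of the failure mode sponge avoids:</p>

```shell
# DANGER demo: redirecting a command's output back to its own input file
# clobbers the file, because '>' truncates it before iconv reads a byte.
echo 'some text' > demo.txt
iconv -f cp1252 -t utf-8 demo.txt > demo.txt
wc -c < demo.txt    # 0 bytes: the original content is gone
```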
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;( )&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
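<p>Much the same effect can be had with an ordinary pipe (wget -q -O - url | tar zxvf -), since tar will happily read an archive from stdin. Here is a self-contained sketch using a locally built tarball instead of a live download, so it can be tried offline:</p>

```shell
# Build a small tarball, then extract it by feeding tar's stdin --
# the same data flow the process-substitution one-liner sets up.
mkdir -p atarfile && echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile
cat atarfile.tar.gz | tar zxf -    # stand-in for: wget -q -O - URL | tar zxf -
```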
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can feed the module list command&#39;s output to the disable command as arguments using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a>, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
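<p>A small aside: the backtick form of command substitution shown above works fine, but the POSIX $( ) form is equivalent and nests without escaping. Assuming your Drush version supports the -y option to answer yes automatically (treat that flag as something to verify against your install), the whole thing can run unattended:</p>

```shell
# $( ) is interchangeable with backticks and nests cleanly:
inner=$(echo "$(echo nested)")
echo "$inner"    # nested
# Applied to the Drush one-liner (requires a Drupal site dir; -y answers yes):
# drush pm-disable $(drush pm-list --no-core --type=module --pipe) -y
```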
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
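<p>Since Git 1.7.0 the last two steps can be collapsed into one: push with -u (--set-upstream) and the tracking relationship is configured as part of the push. A throwaway-repository sketch:</p>

```shell
# Create a local "remote", then push a new feature branch with -u so
# tracking is set up in the same step.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git -c user.name=me -c user.email=me@example.com commit --allow-empty -q -m 'initial'
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature    # push and set upstream at once
```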
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
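<p>To verify the result, list the user&#39;s groups with the id command (group membership for existing login sessions is only re-read at login, so the user may need to log out and back in first):</p>

```shell
# List supplementary groups; with no argument it reports the current
# user, with an argument any user on the system.
id -nG              # e.g. aaron wheel ...
# id -nG aaron      # same, for a named user
```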
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
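<p>A small refinement: git stash apply leaves the stash entry behind, so if you won&#39;t need it again, git stash pop applies and drops it in one go. A throwaway sketch:</p>

```shell
# Stash a change, then pop it: apply + drop in a single step.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git -c user.name=me -c user.email=me@example.com commit --allow-empty -q -m 'initial'
echo 'work in progress' > notes.txt
git add notes.txt
git stash -q          # working tree is clean again
git stash pop -q      # change is back, and the stash entry is gone
```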
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
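<p>Worth knowing: on reasonably recent Git (1.6.6 onwards), a bare git checkout develop will do the right thing on its own; if no local develop exists but exactly one remote has a develop branch, Git creates the tracking branch for you. A throwaway sketch:</p>

```shell
# Clone a repo whose remote has a 'develop' branch, then let checkout's
# do-what-I-mean rule create the local tracking branch.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/seed"
cd "$tmp/seed"
git -c user.name=me -c user.email=me@example.com commit --allow-empty -q -m 'initial'
git push -q origin HEAD "HEAD:develop"    # publish default branch + develop
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git checkout -q develop   # no local develop: Git creates one tracking origin/develop
```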
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
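<p>Equivalently, rather than editing the file by hand, you can set the identity with git config run as the jenkins user. In this sketch HOME is pointed at a scratch directory so it can be tried safely anywhere; on a real box you would run the two config commands as jenkins (e.g. via sudo -u jenkins -H) and drop the HOME override:</p>

```shell
# Write the same [user] section via git's own CLI.
export HOME=$(mktemp -d)                 # scratch HOME for demo purposes
git config --global user.name  'Jenkins'
git config --global user.email 'jenkins@localhost'
git config --global user.name            # prints: Jenkins
```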
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give it a name. Choose whatever you like, but for the purposes of this tutorial I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to do a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload the server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
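<p>If you&#39;d rather script the change than open an editor, a one-line sed will do it. This is just a sketch (the helper name is mine, and 9090 is an example port):</p>

```shell
# Rewrite the HTTP_PORT line in a Jenkins defaults file.
# Run it with root privileges against /etc/default/jenkins, then
# restart the service as shown above.
set_jenkins_port() {
  local port="$1" file="$2"
  sed -i "s/^HTTP_PORT=.*/HTTP_PORT=${port}/" "$file"
}
```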
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
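<p>The gist embeds the full query; its general shape is roughly this (a sketch from memory, not the exact gist, and the helper name is mine):</p>

```shell
# Print a per-table size report query for one schema, largest tables first.
# Pipe the output into the mysql client to run it.
table_sizes_sql() {
  cat <<SQL
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = '$1'
ORDER BY (data_length + index_length) DESC;
SQL
}
# usage: table_sizes_sql mydb | mysql -uuser -p
```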
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer:</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful but it helped google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, E/JGIt plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because a subsequent git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it requires the least work.</p>
<p>To avoid having to set the upstream manually in future, you can also enable:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array of the tables you want to export. Another reply had a better way: query mysql for the list of tables matching a pattern, put them in an array, then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
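<p>For illustration, the core of the idea looks something like this (a sketch, not the exact gist; the function name is mine and credentials are assumed to come from ~/.my.cnf or extra client arguments):</p>

```shell
# Ask mysql for the tables matching a LIKE pattern, then dump just those.
mysqldump_bypattern() {
  local db="$1" pattern="$2"
  local tables
  # -N suppresses the column header so we get bare table names, one per line
  tables=$(mysql -N -e "SHOW TABLES LIKE '${pattern}'" "$db") || return 1
  [ -n "$tables" ] || { echo "no tables match ${pattern}" >&2; return 1; }
  # word splitting on $tables is intentional: one argument per table
  mysqldump "$db" $tables
}
# usage: mysqldump_bypattern mydb 'mytables_%' > mytables.sql
```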
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. Until those updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, instead using the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to load; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily run into the gigabytes.</p>
<h4>The Basics</h4>
<p>The process for creating and restoring a snapshot is trivial:</p>
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import
</code></pre>
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character in a single-byte encoding or up to 4 bytes per character in UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line:</p>
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2
</code></pre>
<p>While it seems a nice efficient way to do your backup, this should be avoided: the dump can only proceed as fast as the compressor consumes it, so locks are held for longer, and (by default with MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
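<p>The gzip versus bzip2 trade-off discussed above is easy to measure for yourself. The sketch below generates some repetitive SQL-like text to stand in for a real dump, compresses it, and verifies the round trip; the file names are illustrative, and bzip2 is only attempted if it is installed.</p>

```shell
# Generate ~1MB of repetitive SQL-like text to stand in for a real dump
seq 1 20000 | sed 's/.*/INSERT INTO t VALUES (&, "some repetitive row data");/' > sample.sql

# Compress with gzip; also try bzip2 when it is installed
gzip -c sample.sql > sample.sql.gz
if command -v bzip2 >/dev/null; then bzip2 -c sample.sql > sample.sql.bz2; fi

# Compare sizes: the compressed copies should be dramatically smaller
ls -l sample.sql*

# The import path in miniature: decompress and check the round trip
gunzip -c sample.sql.gz | cmp -s - sample.sql && echo "round trip ok"
```

Highly repetitive dump files like this compress extremely well, which is why zipping backups pays off so handsomely in practice.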
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other clients: writes can occur while the backup is taking place and this will not affect the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the integrity of the dumped MyISAM tables can be lost as writes occur to them during the backup process; that risk is weighed against the risk of blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
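<p>Putting the pieces together, a backup command combining the options discussed here might look like the following. It is a sketch only: the shell function simply assembles and prints the command rather than running it (no live MySQL server is assumed), and the user and database names are placeholders.</p>

```shell
# Assemble a dump command using the options discussed above. Printed rather
# than executed here, since this sketch assumes no live MySQL server.
build_dump_cmd() {
  db="$1"
  echo "mysqldump --single-transaction --skip-lock-tables" \
       "--disable-keys --no-autocommit -uuser -p $db | gzip -c > $db.sql.gz"
}

build_dump_cmd mydatabase
```

Dropping the command into a small function like this also makes it easy to reuse the same options across cron jobs and ad hoc backups.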
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example:</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, matching all the way to the final " character on the line.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
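<p>You can see the greedy versus non-greedy difference outside the editor too. Assuming a GNU grep built with PCRE support (the -P flag), compare the two match styles on the sample string:</p>

```shell
s='"The quick brown fox", "The quicker brown fox".'

# Greedy: .* runs all the way to the last quote on the line
echo "$s" | grep -oP '"The quick brown.*"'

# Non-greedy: .*? stops at the first closing quote
echo "$s" | grep -oP '"The quick brown.*?"'
```

The greedy form prints both quoted phrases as one match, while the non-greedy form prints only "The quick brown fox", mirroring what `.\{-}` does in VIM.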
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all modified and deleted tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is the conventional name for a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
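<p>A quick end-to-end demonstration, using a throwaway repository and a hypothetical local.cfg defaults file (git must be on your PATH; the committer identity is set inline just for the demo):</p>

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Commit a default config file once
echo "password=changeme" > local.cfg
git add local.cfg
git -c user.name=Demo -c user.email=demo@example.com commit -qm "add default config"

# Stop tracking further changes, then edit the file locally
git update-index --assume-unchanged local.cfg
echo "password=supersecret" >> local.cfg

# The working tree now reports clean despite the local edit
git status --porcelain
```

The final status prints nothing: git is pretending the file has not changed, which is exactly what you want for machine-local config tweaks.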
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&amp;#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and found the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a>, author of PHPUnit and much of the Jenkins PHP suite of tools, has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C, and BASH the first element of ARGV is the program's name. In Ruby, and in Perl, it is the first argument passed into the program.</p>&#13;
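<p>The shell side of the comparison is easy to check for yourself. In bash, as in C and PHP, the program name and the first argument are kept separate: $0 holds the script name and $1 the first argument (the counterpart of Ruby's ARGV[0]). The script path below is just an example.</p>

```shell
# Write a tiny script that prints its own name and its first argument
cat > /tmp/argv_demo.sh <<'EOF'
echo "program:   $0"
echo "first arg: $1"
EOF

bash /tmp/argv_demo.sh helloworld
```

Running it prints the script path for $0 and "helloworld" for $1, whereas in Ruby ARGV[0] would already be "helloworld".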
<p>I'm trying to think which makes more sense; probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin / feature you need to go to the install new software screen. On a Mac it's found by highlighting the help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often in the course of the index's lifecycle want to update documents. This can prove tricky with the current implementation as there is no in situ update feature; you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document with an ID. Even worse is if your unique ID happens to be a string such as a url or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a>. Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a downloaded VBulletin language pack, convert that to UTF-8 too:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is twofold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative and well spaced, with a minimum of fonts. Giant mastheads, fancy bullets, a mess of typefaces: none of it is impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similarity in the titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (e.g. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And of course there is a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an appreciation of memory management. C is rare among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array of 20 chars, enough for 19 characters plus the terminating NUL, and str refers to an address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in: originally, your business logic was meant to live there. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is stored first (big endian) or the little end is (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL (including the encoded POST data), the other containing the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library in your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
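<p>If you prefer to avoid the interactive editor, the same definition can be set with propset. This is a sketch only; the target directory 'lib/' is illustrative and not from the original post:</p>

```shell
# Hypothetical sketch: set the externals definition non-interactively.
# 'lib/' is an illustrative target directory within your working copy.
svn propset svn:externals \
    'Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/' lib/

# Commit the property change, then update to pull down the external
svn commit -m 'Pull in Zend Framework 1.10.5 via svn:externals' lib/
svn update lib/
```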
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>529</em></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case-sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in some (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
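<p>The mapping from class name to file path is mechanical, and a few lines of plain PHP can illustrate it. This is a simplified sketch of the idea, not the actual Magento autoloader code:</p>

```php
<?php
// Simplified sketch of how Magento's autoloader maps a class name to a file:
// underscores become directory separators and each segment is upper-cased
// on its first letter only — the rest of each segment is left untouched.
function classToFile(string $class): string
{
    return str_replace(' ', '/', ucwords(str_replace('_', ' ', $class))) . '.php';
}

// A camelcased tail stays inside a single file name...
echo classToFile('MyPackage_MyModule_Model_Alongnameforamodel'), "\n";
// ...while underscores fan out into directories.
echo classToFile('MyPackage_MyModule_Model_A_Long_Name_For_A_Model'), "\n";
```

<p>Hence the advice above: stick to lowercase, underscore-free segment names and the loader's expectations match your file layout on every file system.</p>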
<p>On Windows this is fine; on a case-sensitive file system (e.g. case-sensitive HFS+ on a Mac, or most Unix file systems), this will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find the attribute code, look it up either in the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
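<p>A batch run over a product collection might look like this. This is a hedged sketch against the Magento 1 API, not code from my own batch scripts; the lookupSalesCount() helper is purely illustrative and stands in for wherever your ranking data comes from:</p>

```php
<?php
// Hypothetical sketch (Magento 1): update a single attribute across many
// products without triggering the expensive full save() on each one.
$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    // lookupSalesCount() is an illustrative stand-in for your own data source
    $product->setNumSales(lookupSalesCount($product->getId()));
    // Persist just this one attribute value
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```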
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
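<p>For a Unix-timestamp column, the same cutoff can be expressed via FROM_UNIXTIME. A sketch only; 'ts_column' is an illustrative column name:</p>

```sql
-- Hypothetical: select rows whose Unix timestamp is more than 30 days old
SELECT * FROM table
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(ts_column);
```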
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the row is more than 30 days old and will be selected.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url after running the installer, simply open app/etc/local.xml in an editor and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. The type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name=root element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your shell environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>To get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale (you can list the available locales with 'locale -a'). If you're using en_GB or de_DE, edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
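<p>As a quick sketch of the steps above (de_DE.UTF-8 here is just my example; substitute whichever locale 'locale -a' shows on your system):</p>

```shell
# List a few of the UTF-8 capable locales this system knows about.
locale -a | grep -i 'utf' | head -n 5

# The line to add to /etc/profile, ~/.profile or ~/.bash_profile
# (de_DE.UTF-8 is just an example locale):
export LC_ALL='de_DE.UTF-8'

# Confirm the current shell picked it up.
echo "LC_ALL is now: $LC_ALL"
```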
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1; lower ASCII covers the basic alphabet (a-z, A-Z, hyphens, commas, and so on). The extended characters, however (accents, symbols, umlauts), differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<pre>$utf8Text = iconv(&#39;iso-8859-1&#39;, &#39;utf-8&#39;, $isoText);</pre>
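<p>The same conversion is available from the shell via the iconv command line tool, which is handy for checking what is actually in a file. A quick sketch (the file paths are throwaway examples):</p>

```shell
# Write an iso-8859-1 file containing a copyright sign (byte 0xA9).
printf 'Copyright \251 2010\n' > /tmp/latin1.txt

# Convert it to utf-8.
iconv -f iso-8859-1 -t utf-8 /tmp/latin1.txt > /tmp/utf8.txt

# The utf-8 file is one byte longer: the single 0xA9 byte becomes
# the two-byte sequence 0xC2 0xA9 in utf-8.
wc -c < /tmp/latin1.txt
wc -c < /tmp/utf8.txt
```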
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output, you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<pre>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</pre>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll get either a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
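<p>The empty '' is the backup extension that the BSD sed shipped with Mac OSX insists on (GNU sed, by contrast, accepts a bare -i). A more portable sketch is to always attach a real suffix to -i, which both variants accept:</p>

```shell
# Scratch file to edit in place.
printf 'hello world\nhello again\n' > /tmp/helloworld.txt

# Attaching a suffix directly to -i (no space) works on both
# GNU sed and BSD/macOS sed, and leaves a .bak backup behind.
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt

cat /tmp/helloworld.txt
```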
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br />http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expansion gives the number of elements in FILES, so $(seq 0 $((${#FILES[@]} - 1))) produces one index per element. The seq command prints a sequence of numbers from x to y: seq 0 4 gives you 0, 1, 2, 3, 4, one number per line.</p>&#13;
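<p>For the common case where you don&#39;t need the index at all, bash can expand the array elements directly; a simpler sketch of the same loop:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

# "${FILES[@]}" expands to one word per element, so no index
# arithmetic or seq subshell is needed.
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```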
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately, this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you use a remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
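<p>A minimal demonstration, together with one of the workarounds (process substitution keeps the loop in the current shell; this assumes bash, not plain sh):</p>

```shell
count=0

# Each stage of a pipeline runs in its own subshell, so this
# increment happens in a child process and is lost afterwards.
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))
done
echo "after pipeline: $count"

# Workaround: process substitution feeds the loop without putting
# it in a subshell, so the variable survives.
while read -r line; do
  count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "after process substitution: $count"
```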
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward.</p></blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can tell git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible, you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
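<p>A quick scratch-repo sketch of the config in action (the repository path, branch names and identity here are throwaway examples):</p>

```shell
# Build a throwaway repo with two divergent branches.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name Dev
git config merge.ff only

echo base > file.txt
git add file.txt && git commit -qm 'base'
main=$(git symbolic-ref --short HEAD)

git checkout -qb feature
echo feature >> file.txt && git commit -qam 'feature work'

git checkout -q "$main"
echo mainline > other.txt
git add other.txt && git commit -qm 'mainline work'

# The histories have diverged, so with merge.ff=only git refuses
# to create a merge commit and the merge fails instead.
if git merge feature 2>/dev/null; then
  result=merged
else
  result=refused
fi
echo "merge result: $result"
```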
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title: when I think cookbook, I think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect, and then some. I found the chapter on test doubles (that is, mocks, stubs and fakes) particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. In my experience that leads, at best, to confusion, and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (unsurprisingly) easier to test than code that is constantly mutating global state.</p>
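<p>The point about assertion messages is easy to demonstrate. PHPUnit&#39;s assertions take the message as their final argument; the sketch below shows the same idea with Python&#39;s unittest, purely for illustration (the test name and values are invented):</p>

```python
import unittest

class CartTotalTest(unittest.TestCase):
    def test_total_includes_every_item(self):
        total = sum([1999, 500])  # stand-in for real cart logic, in cents
        # The final message argument is printed when the assertion fails,
        # which makes the broken expectation much quicker to locate.
        self.assertEqual(2499, total, "cart total should include both items")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CartTotalTest)
)
```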
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: the Statist style and the Mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null if tableref.column is equal to 0. Returning null excludes that value from MIN&#39;s calculation, which has the effect of only considering column values greater than zero.</p>
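<p>To make the behaviour concrete, here is a quick, self-contained illustration. It uses Python&#39;s sqlite3 module purely for convenience (SQLite supports NULLIF and MIN the same way); the table and column names are invented for the example:</p>

```python
import sqlite3

# Toy version of the products table: group 1 contains a zero-priced item
# that should be ignored when computing the group's lowest price.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (group_id INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [(1, 0.0), (1, 19.99), (1, 24.50), (2, 5.00)],
)

# NULLIF(price, 0) turns zero prices into NULL, and the MIN aggregate
# skips NULLs, so only prices greater than zero are considered.
rows = conn.execute(
    "SELECT group_id, MIN(NULLIF(price, 0)) AS min_price "
    "FROM products GROUP BY group_id ORDER BY group_id"
).fetchall()
print(rows)  # [(1, 19.99), (2, 5.0)]
```

<p>Without the NULLIF, group 1 would report 0.0 as its minimum price.</p>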
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. php_select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does check for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc  # or source ~/.zshrc
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available variants, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
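<p>For comparison, here is the same first-vs-last split expressed with Python&#39;s string methods, just to make the two behaviours explicit (this is Python rather than PHP, of course):</p>

```python
url = "http://www.google.com/a/b/c/d.img"

# Like strrchr($url, '/'): from the *last* occurrence of the needle onwards.
last = url[url.rfind("/"):]
# Like strstr($url, '/'): from the *first* occurrence of the needle onwards.
first = url[url.find("/"):]

print(last)   # /d.img
print(first)  # //www.google.com/a/b/c/d.img
```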
<p>Now I&#39;ve been programming in PHP for pushing on 12 years, and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted, and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate the primitive types such as String, Integer, Array, and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
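<p>As a rough sketch of what such a wrapper could look like (in Python for brevity, and with method names I have simply made up), the point is that consistent naming makes the first/last relationship obvious:</p>

```python
class Str:
    """A hypothetical string wrapper with consistently named methods."""

    def __init__(self, value):
        self.value = value

    def _tail_from(self, index):
        # Mirror strstr/strrchr returning false when the needle is absent.
        return Str(self.value[index:]) if index != -1 else None

    def after_first(self, needle):
        """Substring from the first occurrence of needle (cf. strstr)."""
        return self._tail_from(self.value.find(needle))

    def after_last(self, needle):
        """Substring from the last occurrence of needle (cf. strrchr)."""
        return self._tail_from(self.value.rfind(needle))


url = Str("http://www.google.com/a/b/c/d.img")
print(url.after_last("/").value)   # /d.img
print(url.after_first("/").value)  # //www.google.com/a/b/c/d.img
```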
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had commited themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically, if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ... later in _getItemsXml(), $unitPrice is interpolated into the item XML sent to Google:
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to the empty string &#39;&#39;, not 0. As the unit-price element expects a decimal value, the empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool; instead, take advantage of the local and community codepools&#39; higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
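<p>The lookup order can be sketched with a few lines of shell. The paths mirror the real Magento tree, but the &quot;class files&quot; here are just placeholder text, so this is only a simulation of the classloader&#39;s behaviour:</p>

```shell
# Simulate Magento's codepool lookup: local, community, then core --
# the first pool that contains the class file wins.
root=$(mktemp -d)
path="Mage/GoogleCheckout/Model/Api/Xml/Checkout.php"
for pool in local community core; do
  mkdir -p "$root/app/code/$pool/$(dirname "$path")"
done
echo "core version" > "$root/app/code/core/$path"
echo "patched local version" > "$root/app/code/local/$path"
resolved=""
for pool in local community core; do
  if [ -f "$root/app/code/$pool/$path" ]; then
    resolved=$(cat "$root/app/code/$pool/$path")
    break
  fi
done
echo "$resolved"
```

<p>Because a copy exists under app/code/local, the patched file wins and the core file is never consulted.</p>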
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
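<p>Under the hood, nginx -s quit sends SIGQUIT to the master process, while -s stop sends SIGTERM; the drain-versus-drop distinction is plain signal handling. A generic shell sketch (no nginx involved, just a toy &quot;worker&quot;) mimics the idea:</p>

```shell
# A toy worker: QUIT means finish up and exit cleanly, TERM means stop dead.
log=$(mktemp)
(
  trap 'echo "drained connections, exited cleanly" >> "$log"; exit 0' QUIT
  trap 'echo "stopped immediately" >> "$log"; exit 1' TERM
  while :; do sleep 1; done
) &
worker=$!
sleep 1                 # the worker is busy "serving"
kill -QUIT "$worker"    # graceful, like nginx -s quit
wait "$worker"
cat "$log"
```

<p>Send TERM instead and the worker dies mid-drink.</p>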
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. Back in November, though, I resolved to read more technical books, and to focus particularly on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, Metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite the team formalising most of the vocabulary of OO software development during Smalltalk&#39;s creation, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the succession to C++ and Java.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013 and we (still) don&#39;t have flying cars or hoverboards, and as developers we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits that improve my command line efficiency.</p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to compose a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column command columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
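<p>A quick self-contained run of the same idea (the input here is made up rather than read from /etc/passwd; column is the util-linux/BSD tool):</p>

```shell
# -s':' splits each line on colons; -t aligns the fields into a table.
out=$(printf 'user:shell:home\nroot:/bin/sh:/root\n' | column -s':' -t)
echo "$out"
```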
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number then switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>Vim expands % to the current file name, so the buffer is piped through sudo tee, which rewrites the file with root privileges. I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short: assuming you have macports in /opt/local (the default) and are using the mysql55 port, you do this:</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings early enough for xdebug to hook in.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its localhost on port 9000 back to the machine you connected from, on port 9000. So when xdebug connects to localhost:9000 on the VM, it actually reaches mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting twelve months to be a <a href="http://www.php.net">PHP</a> Developer: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full-stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.com">Github</a> is giving PHP a new kick, and the results are starting to make themselves felt. We now have second-generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to PERL&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable release of package y, while package z requires the beta release of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking any sort of dynamism. If you make something easy, people will use it. PEAR is difficult for developers and users alike.</p>
<p>Composer democratises (in the best sense) package management and puts full control of dependencies in the hands of library developers. They are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
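<p>To make that concrete, declaring a dependency is a few lines of composer.json. The package name acme/example-app below is made up for illustration; monolog/monolog is a real Packagist library:</p>

```shell
# Sketch: write a minimal composer.json into a scratch directory.
# `composer install` would then resolve the dependency graph into
# vendor/ and generate a class autoloader.
dir=$(mktemp -d)
printf '%s\n' \
  '{' \
  '    "name": "acme/example-app",' \
  '    "require": {' \
  '        "monolog/monolog": "1.2.*"' \
  '    }' \
  '}' > "$dir/composer.json"
cat "$dir/composer.json"
```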
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint prefers a file to work with rather than piped input, so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
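<p>Process substitution works with any tool that wants file name arguments, not just xmllint. A self-contained illustration using diff (run via bash explicitly, since plain sh lacks the feature):</p>

```shell
# bash turns each <(...) into a readable path such as /dev/fd/63,
# so diff sees two "files" without any temporary files being created.
out=$(bash -c 'diff <(printf "a\nb\n") <(printf "a\nb\n") || exit 1; echo files-identical')
echo "$out"
```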
<p>So, magerun and xmllint: a simple way to get a formatted, easy-to-examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;</p>
<p>Alternatively if you don&#39;t care what differs in the specific contents between two branches, and only want to see different files you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
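<p>Here is the first command end to end in a throwaway repository (all file names and the commit message are invented for the demo):</p>

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)          # scratch repository in your tmpdir
cd "$repo"
git init -q .
echo 'hello'   > readme.txt
echo 'debug=0' > app.cfg
git add .
git -c user.name=demo -c user.email=demo@example.com \
    commit -qm 'initial import'
# One line of commit metadata, then just the list of touched files:
git show --name-only --format=oneline HEAD
```

<p>The output is the commit hash and subject, followed by app.cfg and readme.txt, one per line.</p>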
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, anyone who can connect to the running mysqld service gets full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it means, basically, that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
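<p>The whole lifecycle of a stale tracking branch can be reproduced in a couple of throwaway repositories (all paths and the old-feature branch name are invented for the demo):</p>

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"

# Host A clones, commits, and pushes two branches.
git clone -q "$work/origin.git" "$work/host_a" 2>/dev/null
git -C "$work/host_a" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial'
git -C "$work/host_a" push -q origin HEAD:master HEAD:old-feature

# On another host, old-feature gets deleted upstream.
git clone -q "$work/origin.git" "$work/host_b" 2>/dev/null
git -C "$work/host_b" push -q origin :old-feature

# Host A still carries the stale origin/old-feature tracking ref...
git -C "$work/host_a" branch -r

# ...until it prunes.
git -C "$work/host_a" remote prune origin
git -C "$work/host_a" branch -r
```

<p>The final branch listing no longer shows origin/old-feature, mirroring the before and after output above.</p>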
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table-cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
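<p>For reference, such an override looks roughly like the following. The event and observer node names here are from memory and vary between Magento versions, so dump your own merged config (magerun can do this) and mirror the nodes you actually find:</p>

```xml
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <!-- same observer name as in Mage_Log's config.xml -->
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```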
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe: they fixed one instance but (programmers being human) missed the other, identical, line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). To use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options: remove it using CSS, or drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting: {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped \; terminates the command sequence (much as a semicolon does in regular bash).</p>
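<p>Here is the whole recipe end to end against fabricated file times, so you can see exactly which files fall inside the window (all file names are invented for the demo):</p>

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
mkdir "$work/logs"

# Three files with fabricated modification times via touch -t.
touch -t 202001010000 "$work/logs/report.jan"
touch -t 202006150000 "$work/logs/report.jun"
touch -t 202012310000 "$work/logs/report.dec"

# Boundary marker files: the window is March through September 2020.
touch -t 202003010000 "$work/start_date_file"
touch -t 202009010000 "$work/end_date_file"

# Only report.jun is newer than start AND not newer than end:
find "$work/logs" -type f -newer "$work/start_date_file" \
    ! -newer "$work/end_date_file"

# The same expression drives the deletion:
find "$work/logs" -type f -newer "$work/start_date_file" \
    ! -newer "$work/end_date_file" -exec rm -f {} \;
ls "$work/logs"    # report.jan and report.dec survive
```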
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to setup a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, as I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately, it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
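<p>The formatting stages can be tried in isolation by substituting canned input (here, three made-up values) for the mysql output:</p>

```shell
#!/bin/sh
# Same awk | paste | sed chain, minus the database.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# prints: ['red','green','blue'];
```

<p>Note the escaped \( \) in the sed expression: with basic regular expressions, bare parentheses match literal characters and the \1 backreference fails.</p>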
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
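<p>Putting it together with concrete values (the published_date field name and the date literals are invented for illustration), an &#39;in range or missing&#39; filter looks like:</p>

```
-(-published_date:[2012-01-01T00:00:00Z TO 2012-12-31T23:59:59Z] AND published_date:[* TO *])
```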
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of RAM.</em></p>
<p>Changing getSingleton() to getModel() took RAM usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While this means it has to be constructed anew each time round the loop, it allows PHP to free the memory it was using.</p>
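<p>The effect is easy to reproduce outside Magento. Here&#39;s a standalone sketch - a made-up Validator class, not the Magento one - showing how a single long-lived instance accumulates state for the whole run, while per-iteration instances give their memory back:</p>
<pre><code>class Validator
{
    public $seen = array();
    public function process($item) { $this-&gt;seen[] = $item; }
}

// Singleton-style: one instance holds a reference to every item processed
$singleton = new Validator();
for ($i = 0; $i &lt; 1000; $i++) {
    $singleton-&gt;process(str_repeat(&#39;x&#39;, 1000));
}
// count($singleton-&gt;seen) is 1000 - nothing can be freed until the end

// getModel-style: a fresh instance per iteration goes out of scope,
// so PHP can reclaim its memory each time round the loop
for ($i = 0; $i &lt; 1000; $i++) {
    $validator = new Validator();
    $validator-&gt;process(str_repeat(&#39;x&#39;, 1000));
} // the previous iteration&#39;s $validator is released here
</code></pre>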
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
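<p>For reference, that setup is just the following (the archive and extracted folder names here are illustrative - adjust them to your release):</p>
<pre><code>$ sudo tar xzf PhpStorm-4.0.1.tar.gz -C /opt
$ sudo ln -s /opt/PhpStorm-4.0.1 /opt/PhpStorm
</code></pre>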
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils... and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean, in the absolute best case, merely becoming un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution, so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
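<p>In the same style as the mounts above:</p>
<pre><code>$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>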
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions: the kernel and kernel modules will be those of the host. If you need to access some specific hardware, you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
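<p>Once you&#39;re done, exit the chroot and unmount everything in reverse order (these commands assume the mountpoints used above) before rebooting:</p>
<pre><code>$ exit
$ umount /mnt/ubuntu/sys /mnt/ubuntu/dev /mnt/ubuntu/proc
$ umount /mnt/ubuntu/boot
$ umount /mnt/ubuntu
</code></pre>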
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all their children&#39;s stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (the reverse of the if branch&#39;s column order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array result set where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just 2 rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
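<p>To see why the column order matters, here&#39;s a plain-PHP sketch of fetchPairs()&#39;s behaviour (illustrative only - not the real Zend DB adapter): the first selected column becomes the array key, so rows that share that value overwrite one another.</p>
<pre><code>function fetchPairsSketch(array $rows)
{
    $pairs = array();
    foreach ($rows as $row) {
        list($key, $value) = array_values($row);
        $pairs[$key] = $value; // duplicate keys collapse into one entry
    }
    return $pairs;
}

// status first (the buggy column order): one entry per unique status
$buggy = fetchPairsSketch(array(
    array(&#39;value&#39; =&gt; 1, &#39;entity_id&#39; =&gt; 10),
    array(&#39;value&#39; =&gt; 1, &#39;entity_id&#39; =&gt; 11),
    array(&#39;value&#39; =&gt; 2, &#39;entity_id&#39; =&gt; 12),
));
// $buggy is array(1 =&gt; 11, 2 =&gt; 12) - products 10 and 11 collapsed

// entity_id first (the fix): one status per product
$fixed = fetchPairsSketch(array(
    array(&#39;entity_id&#39; =&gt; 10, &#39;value&#39; =&gt; 1),
    array(&#39;entity_id&#39; =&gt; 11, &#39;value&#39; =&gt; 1),
    array(&#39;entity_id&#39; =&gt; 12, &#39;value&#39; =&gt; 2),
));
// $fixed is array(10 =&gt; 1, 11 =&gt; 1, 12 =&gt; 2)
</code></pre>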
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks in a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different: they should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily digested on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy on the eye. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app, which opens login shells) .bash_profile gets sourced only on login - specifically, when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell some other way, such as with the su - command, or via an explicit login-shell option sometimes provided by a desktop environment. In these cases the same rule applies: a login shell means .bash_profile is sourced, and .bashrc only if .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile - things like paths and any one-time configuration settings that aren&#39;t likely to change much. But it&#39;s quite reasonable to just put a &#39;source ~/.bashrc&#39; in your .bash_profile and then put everything in .bashrc.</p>
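<p>If you take the latter approach, the one-liner in ~/.bash_profile (one common form of it) is simply:</p>
<pre><code>[ -f ~/.bashrc ] &amp;&amp; . ~/.bashrc
</code></pre>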
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
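<p>Alternatively, if you would rather declare the block in a layout update than instantiate it in the template, something along these lines (the block name here is chosen for illustration) achieves the same result:</p>
<pre><code>&lt;block type="cms/block" name="my.static.block"&gt;
    &lt;action method="setBlockId"&gt;&lt;block_id&gt;identifier&lt;/block_id&gt;&lt;/action&gt;
&lt;/block&gt;
</code></pre>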
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
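<p>With that in place, any chained sequence of calls resolves back to the mock itself - for example (using a few Zend_Mail-style setter names):</p>
<pre><code>$mail = $mock-&gt;setSubject(&#39;Hello&#39;)
             -&gt;setBodyText(&#39;Hi there&#39;)
             -&gt;addTo(&#39;someone@example.com&#39;);
// $mail === $mock
</code></pre>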
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, patents reduce the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall$/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
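<p>To see what the sed step in that pipeline does, you can run it against a single, made-up selections line (the package name below is purely illustrative):</p>

```shell
# Simulate one line of `dpkg --get-selections` output for a package that was
# removed but not purged, and rewrite its state from "deinstall" to "purge".
echo "somepkg   deinstall" | sed 's/deinstall$/purge/'
```

<p>Piping the rewritten selections back into <em>dpkg --set-selections</em> marks those packages for purging; <em>dpkg -Pa</em> then carries it out.</p>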
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Use <em>modifyvm</em> only when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and several Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put in and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entityId, and then returns the entry matching $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
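<p>As an aside, the filename rewriting in the loop above doesn&#39;t strictly need sed; bash parameter expansion can do the same job. A minimal sketch (the filename is just an example):</p>

```shell
# ${IMAGE%.jpg} strips a trailing ".jpg", so we can splice in a suffix
# without spawning sed for every file.
IMAGE="photo.jpg"
echo "${IMAGE%.jpg}-resized.jpg"   # -> photo-resized.jpg
```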
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of the local branch mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
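<p>If you want to try the whole lifecycle without touching a real server, you can run it against a throwaway local bare repository. This is just a sketch; the paths, user details and branch names are all illustrative:</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare origin.git          # a local stand-in for a real remote
git clone -q origin.git work 2>/dev/null
cd work
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD:master         # publish a master branch
git push -q origin HEAD:develop        # publish a develop branch
git push -q origin :develop            # push 'nothing' into develop = delete it
git ls-remote --heads ../origin.git    # only refs/heads/master should remain
```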
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is that sponge waits until end-of-file (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it were cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
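<p>You can watch the conversion work on a single byte. 0xE9 (octal 351) is &#39;é&#39; in cp1252; after conversion it becomes the two-byte UTF-8 sequence 0xC3 0xA9:</p>

```shell
# printf emits a raw cp1252 byte (octal escape 351 = 0xE9 = 'é');
# iconv re-encodes the stream as UTF-8.
printf 'caf\351\n' | iconv -f cp1252 -t utf-8   # -> café
```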
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
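<p>If you don&#39;t have moreutils to hand, sponge&#39;s essential behaviour can be approximated in a few lines of shell. This is only a sketch (the function name is made up, and command substitution trims trailing newlines, so it is not byte-exact for every input):</p>

```shell
# Soak up ALL of stdin first, and only then open and write the target file,
# so the same file can safely appear earlier in the pipeline.
mysponge() {
    data=$(cat)                    # read to EOF before touching the file
    printf '%s\n' "$data" > "$1"
}

f=$(mktemp)
printf 'HELLO\n' > "$f"
tr 'A-Z' 'a-z' < "$f" | mysponge "$f"   # in-place lowercasing
cat "$f"                                # -> hello
```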
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
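<p>You can reproduce the same pattern locally without wget by building a small tarball first. A sketch (the filenames are illustrative, and cat simply stands in for anything that writes the archive to stdout):</p>

```shell
workdir=$(mktemp -d) && cd "$workdir"

# Build a tiny example archive.
mkdir atarfile
echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile

# Same shape as the wget one-liner: tar reads from a named pipe created
# by bash process substitution.
tar zxv < <(cat atarfile.tar.gz)
cat atarfile/file.txt   # -> hello
```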
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - must notably its speed - and it&#39;s used by the Zend Framework so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment setup correctly, please checkout my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familar with Git, have a quick read of <a href="git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Freamework application, you&#39;re free to use whatever codebase you like. But for the purposes of this tutorial I&#39;m using this project as a starting point: <a href="mailto:git@github.com">git@github.com</a>:ajbonner/Bookings.git. </p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First thing, we&#39;ll need to make a small amendment to the phpunit configuration file, since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework Test Cases, we have our ant build file setup and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything setup correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convienient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process ran automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a (at first glance) daunting configuration page. We&#39;ll work our way down the page, for the most part we don&#39;t need to make major changes.</p>
<p>Firstly the pdepend task generates a pair of SVG images, in order to have these display on the project dashboard we need to substitute &#39;job-name&#39; to &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;, naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You now have setup the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to first do a build before the project workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will checkout a copy of our project from git then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the dianostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to checkout is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example to reload Jenkins instance&#39;s configuration, you would just fire a http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload Reload server configuration</li>
<li>restart Restart the server</li>
<li>exit Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There&#39;s two options, you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install using aptitude or synaptic, <em>or</em> you can set-up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ sudo echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; &gt;&gt; /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to  to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file, along with a lot of other verbiage, is a message like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not very helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the graphical package manager or aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
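<p>If you just want the shape of it, the key piece is Zend_Mail&#39;s createAttachment() method, which returns a Zend_Mime_Part you can configure. A minimal sketch (the addresses and file paths are placeholders, and it assumes ZF1 is on your include path):</p>

```php
<?php
require_once 'Zend/Mail.php';

$mail = new Zend_Mail();
$mail->setFrom('sender@example.com')
     ->addTo('recipient@example.com')
     ->setSubject('Report attached')
     ->setBodyText('Please find the report attached.');

// createAttachment() returns a Zend_Mime_Part; set its metadata as needed
$attachment = $mail->createAttachment(file_get_contents('/path/to/report.pdf'));
$attachment->type     = 'application/pdf';
$attachment->filename = 'report.pdf';

$mail->send();
```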
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
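<p>The gist below is what I ended up with; the essence is along these lines (a sketch — the target resolution is a placeholder, and depending on your image_size gem version the accessors may be get_width/get_height rather than width/height):</p>

```ruby
require 'image_size'

# List images in the current directory matching a target resolution
Dir.glob('*.{jpg,png,gif}').each do |file|
  File.open(file, 'rb') do |fh|
    size = ImageSize.new(fh)
    puts file if size.width == 1024 && size.height == 768
  end
end
```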
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least work.</p>
<p>Alternatively, to avoid this situation in future repositories, you can set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>With this enabled, git configures tracking automatically when you create a branch from a remote branch.</p>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
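<p>To give a flavour, a first test case might look something like this (a hypothetical sketch — the module and class names are placeholders, and I&#39;m assuming EcomDev&#39;s EcomDev_PHPUnit_Test_Case base class):</p>

```php
<?php
// app/code/local/My/Module/Test/Model/Product.php
class My_Module_Test_Model_Product extends EcomDev_PHPUnit_Test_Case
{
    public function testProductModelCanBeInstantiated()
    {
        $product = Mage::getModel('catalog/product');
        $this->assertInstanceOf('Mage_Catalog_Model_Product', $product);
    }
}
```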
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
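<p>The gist below is the generic version; the underlying idea (sketched here with placeholder credentials and names) is to ask mysql for the matching table names first, then hand that list to mysqldump:</p>

```shell
#!/bin/bash
# Dump only the tables in $DB whose names match $PATTERN
DB="mydb"
PATTERN="mytables_%"   # a SQL LIKE pattern, not a shell glob

TABLES=$(mysql -uuser -p -N -B -e "SHOW TABLES LIKE '$PATTERN'" "$DB")
mysqldump -uuser -p "$DB" $TABLES > "${DB}_subset.sql"
```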
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install the plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it run for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
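<p>You can see this identity property directly in irb: two identical String literals are two distinct objects, while two Symbols of the same name are the one same object.</p>

```ruby
# Each String literal allocates a fresh object...
puts "key".object_id == "key".object_id   # => false

# ...but a Symbol with the same name is always the same object
puts :key.object_id == :key.object_id     # => true
```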
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
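<p>To illustrate the point about Ruby&#39;s &#39;constants&#39;: reassigning one only triggers a runtime warning, and the write still goes through.</p>

```ruby
GREETING = "hello"
GREETING = "goodbye"   # warning: already initialized constant GREETING
puts GREETING          # prints "goodbye"
```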
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of a table can be lost as writes occur to it during the backup process; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
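<p>Putting the options from this post together, a dump of a busy database might look like this (a sketch; the user and database names are placeholders):</p>

```shell
$ mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit \
            -uuser -p mydatabase > mydump.sql
$ gzip mydump.sql
```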
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, runs the status for those paths rather than the current directory.</p>&#13;
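<p>Taken together, the commands so far can be exercised in a throwaway repository (a sketch; the directory, file name and user identity below are examples only, and assume git is installed):</p>

```shell
# End-to-end: create a repo, stage a file, commit it, check status
rm -rf /tmp/git-notes-demo
mkdir -p /tmp/git-notes-demo && cd /tmp/git-notes-demo
git init -q .
echo 'hello' > readme.txt
git add readme.txt
# -c supplies a one-off identity so the commit works on a fresh machine
git -c user.name='Example' -c user.email='example@example.com' \
    commit -q -m 'Initial commit'
git status --untracked-files=no
```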
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is the conventional naming for a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar subversion model of svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&amp;#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger, and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0]; ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C, BASH the first element of ARGV is the program's name. In Ruby, and in PERL, it is the first argument passed into the program. </p>&#13;
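<p>For completeness, here is a minimal sketch of the Ruby side (save it as any file and run it with some arguments; nothing here is specific to a real project):</p>

```ruby
# ARGV holds only the arguments passed to the program.
# The program's own name lives separately in $0 (alias $PROGRAM_NAME).
puts "program:        #{$0}"
puts "first argument: #{ARGV[0].inspect}"
```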
<p>I'm trying to think which makes more sense; probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin / feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often want to update documents over the index's lifecycle.  This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. Even worse is if your unique ID happens to be a string such as a url or path.  Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3.  Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book, or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'. Note that this blanket substitution will also rewrite any literal occurrence of 'latin1' inside post content, so check the diff if that string might appear in your data.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any custom language packs need the same treatment before being re-imported, so convert the language XML export too:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client and b) to impress the client sufficiently that they want to interview you.  So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and set in a minimum of fonts. Giant mastheads, fancy bullets and a mess of typefaces aren't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key; save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required will only annoy the recruiter. That means your CV gets forwarded to the 'round file', not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Do not forget to proofread your CV; this really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leaves a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make the best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, simplify, and support each claim you make. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similarity of the titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables whose names start with a capital letter. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> <span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
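<p>To make the complaint concrete, here is a minimal sketch (the constant names are mine, purely for illustration): reassigning a Ruby constant merely emits a warning, and mutating the object it refers to doesn't even do that.</p>

```ruby
# Reassigning a "constant" only produces a warning -- the new value sticks.
TIMEOUT = 30
TIMEOUT = 60   # warning: already initialized constant TIMEOUT
puts TIMEOUT   # prints 60

# Worse, the object a constant points at is mutable by default;
# mutating it produces no warning at all unless you call .freeze on it.
GREETING = +"hello"   # unary + ensures an unfrozen string
GREETING << ", world"
puts GREETING         # prints "hello, world"
```

<p>Freezing the object (<code>GREETING.freeze</code>) stops the mutation, but nothing stops the reassignment itself.</p>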
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit of learning a bit of C or C++, though, is gaining an appreciation of memory management. C is nearly unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array with room for 19 characters plus the terminating NUL, and str refers to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20-char string defined but want to store a 25-char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be a purely templating language for C web applications, which is where PHP modules/extensions come in: these were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
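<p>As a quick sketch (the CSV fields here are made up for illustration), you can build a CSV entirely in memory and read it back without ever touching the disk:</p>

```php
<?php
// Write CSV rows to an in-memory stream instead of a temp file.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, ['id', 'name']);
fputcsv($fh, [1, 'widget']);

rewind($fh);                      // seek back to the start before reading
$csv = stream_get_contents($fh);  // the whole in-memory "file" as a string
fclose($fh);

echo $csv;  // id,name\n1,widget\n
```

<p>Note that php://memory is backed entirely by RAM; php://temp behaves the same but spills to a temporary file once the data grows past a size threshold.</p>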
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL, including the encoded POST data, the other containing the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>But anyway, using svn:externals is a convenient way of pulling third-party libraries in without storing them in your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example <code>MyPackage_MyModule_Model_ALongNameForAModel</code>. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find <code>A/Long/Name/For/A/Model.php</code>.</p>&#13;
<p>Conversely, trying to address your model with <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load <code>Alongnameforamodel.php</code>, and not <code>ALongNameForAModel.php</code>.</p>&#13;
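<p>To see why, here is a rough sketch of the class-to-path translation a Magento-style autoloader performs (modelled on Varien_Autoload; the classToPath helper name is mine, not Magento's):</p>

```php
<?php
// Underscores become directory separators, and ucwords() capitalises only
// the first letter of each segment -- internal camelcase is never rebuilt.
function classToPath($class)
{
    return str_replace(' ', '/', ucwords(str_replace('_', ' ', $class))) . '.php';
}

echo classToPath('Mymodule_Model_A_Long_Name_For_A_Model'), "\n";
// Mymodule/Model/A/Long/Name/For/A/Model.php
echo classToPath('Mymodule_Model_Alongnameforamodel'), "\n";
// Mymodule/Model/Alongnameforamodel.php -- never ALongNameForAModel.php
```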
<p>On Windows this is fine, on case-sensitive e.g., HFS (Mac) or Unix file-system, this will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchyas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up either in the db (the eav_attribute table) or in the admin backend under catalog-&gt;attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
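<p>For illustration, a batch loop using this might look like the following. This is only a sketch: the collection calls are standard Magento 1 API, but the num_sales attribute and the lookupSales() helper are hypothetical stand-ins.</p>

```php
<?php
// Assumes a Magento 1 installation to bootstrap against; 'num_sales'
// and lookupSales() are hypothetical examples, not real Magento names.
require_once 'app/Mage.php';
Mage::app();

// Stand-in for wherever your new values actually come from.
function lookupSales($sku)
{
    return 0;
}

$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    $product->setNumSales(lookupSales($product->getSku()));
    // Writes only this attribute -- no full save() overhead per product.
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```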
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
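<p>For a timestamp column, the FROM_UNIXTIME variant looks like this (the table and column names are examples):</p>

```sql
-- Same 30-day cut-off against an integer UNIX-timestamp column:
SELECT *
FROM orders
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(created_ts);
```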
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old and will be selected.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates  (app/design/frontend/default/default/layout') to refer to your new template  block. For most general purpose, globally available blocks, this will be  'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile, or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
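<p>For example, the line to add to your profile looks like this (the locale value is an example; pick one that 'locale -a' actually lists on your machine):</p>

```shell
# In /etc/profile, ~/.profile or ~/.bash_profile:
export LC_ALL='de_DE.UTF-8'
```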
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ. So a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as those bytes do not form a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
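<p>Putting the normalise-then-encode steps together, a minimal sketch (the toUtf8Html name is mine; ENT_COMPAT matches htmlentities' default behaviour):</p>

```php
<?php
// Normalise ISO-8859-1 input to utf-8, then encode it for HTML output.
function toUtf8Html($isoText)
{
    $utf8 = iconv('ISO-8859-1', 'UTF-8', $isoText);
    return htmlentities($utf8, ENT_COMPAT, 'UTF-8');
}

// "\xFC" is a latin-1 u-umlaut; it comes out as a proper entity.
echo toUtf8Html("M\xFCnchen"), "\n"; // M&uuml;nchen
```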
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick, with the BSD sed shipped on Mac OSX, is to pass an empty backup suffix: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code></p>&#13;
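<p>If a script has to run under both BSD sed (Mac OSX) and GNU sed, a portable middle ground is to pass a real backup suffix, which both accept:</p>

```shell
printf 'hello world\n' > helloworld.txt
# -i.bak works on BSD and GNU sed alike, leaving helloworld.txt.bak behind.
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt
```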
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base URL directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In OSX, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expansion gives the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
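<p>As an aside, bash can also expand an array directly, which sidesteps the seq arithmetic entirely; a minimal sketch of the same loop:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

# "${FILES[@]}" expands to one word per array element,
# so paths containing spaces survive intact
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```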
<p>So while the syntax is a little smelly, the terse power of it is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: base URLs, test payment or shipping account details, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase                                                          [48/53]There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables set inside the pipeline cannot be passed along it, as each stage runs in its own brand new environment.</p>
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
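<p>A quick illustration of the problem, plus one workaround (process substitution keeps the loop in the current shell):</p>

```shell
# Variables set inside a pipeline vanish with the subshell
count=0
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))
done
echo "$count"    # prints 0: the while loop ran in a subshell

# Workaround: feed the loop with process substitution instead of a pipe
count=0
while read -r line; do
  count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "$count"    # prints 3: the loop ran in the current shell
```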
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits; they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p></blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title. When I think cookbook, I tend to think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to see more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a>, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and a brief comparison between the two principal xUnit TDD styles: the Statist and the Mockist/London School styles. The former is mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null if tableref.column is equal to 0. MIN ignores null values, so zeros are excluded from the comparison and only values greater than zero can be returned.</p>
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one place where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code-based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then), you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS+ filesystem. That means PHP and php appear as the same thing to the filesystem.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now that the alias is active, we can check it&#39;s working:</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discover option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what variant options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP MySQL port is a little bit of a pain. By default Macports doesn&#39;t set a default MySQL socket path, which leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the socket path for the MySQL version you&#39;re using to PHP&#39;s mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well, but it operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float, etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
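<p>For contrast, Python handles the same first-versus-last-occurrence pair with the symmetrically named find and rfind, which is roughly the kind of naming consistency an object wrapper could give PHP:</p>

```python
url = 'http://www.google.com/a/b/c/d.img'

# Slice from the last occurrence of the needle, like strrchr($url, '/')
print(url[url.rfind('/'):])  # /d.img

# Slice from the first occurrence, like strstr($url, '/')
print(url[url.find('/'):])   # //www.google.com/a/b/c/d.img
```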
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away, and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers that shared his vision of interactive computing, and those, whether in his or the other labs, or in management, that, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
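<p>The underlying failure is easy to reproduce outside Magento. A strict decimal field treats an empty string as invalid rather than as zero, which is what this sketch mimics (in Python, with a hypothetical parse_decimal helper, not anything from Magento or Google&#39;s code):</p>

```python
def parse_decimal(value):
    """Parse a value the way a strict XML decimal field would:
    an empty string is invalid, not zero."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return None

print(parse_decimal(''))     # None: the empty base price GoogleCheckout rejected
print(parse_decimal('0'))    # 0.0
print(parse_decimal('4.50')) # 4.5
```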
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means if two classes are both named Mage_Core_Model_Foo, one in local and the other in core, then the version in local is used.</p>
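<p>That lookup order can be sketched like this (in Python for brevity; the function and the flattened set of file paths are hypothetical illustrations, not Magento&#39;s actual autoloader):</p>

```python
def resolve_class_file(class_name, existing_files):
    """Return the file defining class_name from the highest-priority codepool.

    Mirrors Magento's resolution order: local, then community, then core.
    existing_files is a hypothetical flattened set of file paths.
    """
    rel_path = class_name.replace('_', '/') + '.php'
    for pool in ('local', 'community', 'core'):  # priority order
        candidate = 'app/code/%s/%s' % (pool, rel_path)
        if candidate in existing_files:
            return candidate
    return None

files = {
    'app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php',
    'app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php',
}
# The patched copy in local/ wins over the original in core/
print(resolve_class_file('Mage_GoogleCheckout_Model_Api_Xml_Checkout', files))
```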
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
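<p>Building on the quit signal above, here&#39;s a hedged sketch of a drain-and-wait shutdown (the [n] in the grep pattern stops grep from matching its own process):</p>

```shell
# Ask the master process to quit gracefully, then poll until the
# worker processes have finished serving their open connections.
sudo nginx -s quit
while ps aux | grep -q '[n]ginx: worker'; do
    sleep 1
done
echo 'all workers have drained'
```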
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and considers why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80, and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Although most of the vocabulary of OO software development was formalised during Smalltalk&#39;s creation, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I consider that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was already doing the rounds in the 60s, I wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero: at the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading them than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s features like these that, when you work with a terminal every single day, you just can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type the name of a directory to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
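<p>For reference, this behaviour isn&#39;t enabled in a bare zsh; the following is a hedged sketch of the ~/.zshrc lines behind it (frameworks such as oh-my-zsh define these, or similar, for you):</p>

```shell
# Hedged sketch of ~/.zshrc settings behind the features above.
setopt AUTO_PUSHD        # every cd pushes the old dir onto the stack
setopt AUTO_CD           # a bare directory name (or ..) cds into it
alias d='dirs -v'        # list the stack with numeric indices
alias 1='cd -1'          # jump to stack entry 1 (likewise 2..9)
```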
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
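<p>The trick works because :w !cmd pipes the buffer to cmd&#39;s standard input, and tee (elevated by sudo) performs the privileged write to the file named by %. Here is the shell equivalent, minus sudo for the sake of a safe illustration (/tmp/demo.conf is a throwaway path):</p>

```shell
# tee copies its stdin to the named file and echoes it back;
# with sudo in front it can write files your editor cannot.
echo 'server_name example.com;' | tee /tmp/demo.conf
```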
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine; however, if we want to debug during a phpunit test, normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its own localhost port 9000 back to port 9000 on the machine you connected from. So when xdebug goes to connect to localhost:9000 on the VM, it actually ends up connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin, which isn&#39;t a bad alternative, but once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full-stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, wind the clock back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage: PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable release of package y, while package z requires the beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best (and dubious quality at worst), and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises things (in the best sense) and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
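<p>A quick sketch of what that looks like in practice (the package name acme/widgets is hypothetical): each project declares its own constraints in composer.json, and because Composer resolves and installs per project into a local vendor directory, the PEAR-style conflict described above simply doesn&#39;t arise.</p>

```shell
# Two projects depending on different versions of the same
# (hypothetical) package; Composer resolves each independently.
mkdir -p project-a project-b
cat > project-a/composer.json <<'JSON'
{ "require": { "acme/widgets": "1.0.*" } }
JSON
cat > project-b/composer.json <<'JSON'
{ "require": { "acme/widgets": "2.0.*@beta" } }
JSON
```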
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to feed it command output without creating temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
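<p>To see the mechanism itself, substitute cat for xmllint: process substitution expands &lt;(cmd) to a file-like path (such as /dev/fd/63) holding cmd&#39;s output, which is exactly what a file-only tool needs:</p>

```shell
# <(cmd) gives a readable path to cmd's output; cat stands in
# here for any tool that insists on a file argument.
cat <(echo '<config><modules/></config>')
```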
<p>So, magerun and xmllint: a simple way to get a formatted, easy-to-examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care what specifically differs in the contents of two branches, and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
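<p>The same flag works with git log too, which is handy for scanning which files each recent commit touched:</p>

```shell
# Show the last three commits together with the list of files
# each one modified.
git log --name-only -3
```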
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically there are more examples of bad code out there than on pretty much any other platform, because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was generally respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or on IRC. There are countless well-attended, supported and growing conferences, meetups and the like for PHP programmers. One thing a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there: advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform, you learn through brutal experience what works and what doesn&#39;t, and your nose is finely tuned to bullshit. When I read a PHP article, I know instinctively whether what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block, and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general-purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
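<p>To see that bottom-up search in action without a full Rails app, here is a plain-Ruby sketch of the documented lookup rule (the handler table and names are hypothetical, not the actual Rails implementation):</p>

```ruby
class NotFoundError < StandardError; end

# Handlers as registered in a controller, top to bottom. The catch-all
# goes at the TOP, so the bottom-up search tries specific handlers first.
HANDLERS = [
  [Exception,     ->(e) { "generic handler" }],
  [NotFoundError, ->(e) { "specific handler" }],
]

# Per the docs: scan registered handlers from bottom to top and invoke
# the first one whose class satisfies exception.is_a?(klass).
def handler_for(exception)
  _, handler = HANDLERS.reverse_each.find { |klass, _| exception.is_a?(klass) }
  handler
end

puts handler_for(NotFoundError.new).call(nil) # => specific handler
puts handler_for(RuntimeError.new).call(nil)  # => generic handler
```

<p>Swap the two rows and the <strong>Exception</strong> entry sits at the bottom, matches first, and the specific handler never gets a look in.</p>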
<p>What can we learn from this? Well, that Rails programmers live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how the PHP community is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see which branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
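<p>As an aside, and assuming a reasonably recent git, you can fetch and prune in a single step, or make pruning the default for every fetch:</p>

```shell
# Fetch new refs and drop remote-tracking branches whose upstream is gone:
git fetch --prune origin

# Or have every fetch prune automatically:
git config --global fetch.prune true
```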
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table-cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination, it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe: they fixed one spot but (programmers being human) missed the other, identical, line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (the backslash stops the shell from interpreting the semicolon itself).</p>
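<p>As an alternative to -exec, GNU find also has a built-in -delete action, which avoids spawning rm once per file and can only ever remove what find itself matched. A sketch, reusing touch-created boundary files:</p>

```shell
# Delete files modified strictly after start_date_file and no later than
# end_date_file. The boundary files are excluded by name so that -delete
# does not remove end_date_file itself (it satisfies both timestamp tests).
find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name start_date_file ! -name end_date_file -delete
```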
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
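<p>Putting it all together (the /opt/local paths assume a default Macports prefix):</p>

```shell
# Install a modern bash via Macports:
sudo port install bash

# Whitelist it -- chsh will only accept shells listed in /etc/shells:
echo /opt/local/bin/bash | sudo tee -a /etc/shells

# Switch to it non-interactively:
chsh -s /opt/local/bin/bash
```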
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Because the awk program itself is wrapped in single quotes on the command line, you pass the quote character in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally, I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other string-concatenation approach, would do just fine here too.</p>
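<p>You can check the plumbing without a database to hand by substituting printf for the mysql step (here sed&#39;s &amp; stands for the whole matched line, which sidesteps BRE group-escaping headaches):</p>

```shell
printf '%s\n' red green blue \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/.*/[&];/'
# → ['red','green','blue'];
```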
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field (i.e. it is NULL), you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
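<p>For a concrete instance (the field name end_date is hypothetical, and NOW+7DAYS uses Solr&#39;s date math), matching documents whose end date is either within the next week or not set at all:</p>
<pre><code>-(-end_date:[NOW TO NOW+7DAYS] AND end_date:[* TO *])
</code></pre>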
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for and against. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a>, who looks in-depth at the topic. I generally think, though, that in Object Oriented software you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic; it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. It means the validator has to be constructed anew on each iteration, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break, I mean that in the absolute best case the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. First step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so in this case /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
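<p>Alternatively, fdisk can print the table non-interactively with the -l flag (the device name here is just an example):</p>
<pre><code>$ fdisk -l /dev/sda
</code></pre>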
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
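<p>That copy is a single command - the -L flag makes cp dereference resolv.conf if it happens to be a symlink:</p>
<pre><code>$ cp -L /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>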
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
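<p>For the record, from inside the chroot that amounted to something like this (the grub-install device is an example - point it at your own boot disk):</p>
<pre><code>$ apt-get update
$ apt-get dist-upgrade
$ grub-install /dev/sda
$ update-grub
</code></pre>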
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. This is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike in the if branch, where they are the other way around). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows - one for each unique status value. For this code to work as you would expect, the entity id (product id) needs to be first in the result set so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD) drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. I&#39;ve typically used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks in a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting is clear and legible. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to .twig files, edit your ~/.vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell within an existing session, such as with the su - command, or via the explicit login shell option some terminal emulators provide. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only read if your .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile - things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc line in your .bash_profile and then put everything in .bashrc.</p>
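<p>A minimal sketch of that arrangement (the PATH line is just an example of one-time setup):</p>
<pre><code># ~/.bash_profile
export PATH=&quot;$HOME/bin:$PATH&quot;

# hand everything else off to .bashrc
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
</code></pre>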
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, patents reduce the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
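<p>To sanity-check what the sed stage is doing - it simply rewrites dpkg&#39;s selection state word from deinstall to purge (the package name below is made up):</p>
<pre><code>$ echo &#39;oldpackage deinstall&#39; | sed &#39;s/deinstall/purge/&#39;
&gt; oldpackage purge
</code></pre>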
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Note: modifyvm can only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
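<p>If the VM is already running, recent VirtualBox versions (4.x era, if memory serves) let you add and remove NAT port forwards on the fly via controlvm:</p>
<pre><code>VBoxManage controlvm &quot;VM name&quot; natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage controlvm &quot;VM name&quot; natpf1 delete &quot;guestssh&quot;
</code></pre>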
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there; specifically though, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 shared libraries. But when you get these sorts of issues, it&#39;s always best to see what else is missing too. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing not just a compatible libxss, but several Qt libraries as well.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in the assembly generated for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying, things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately upgrading my test-suites to be 3.6 compatible, is the extraction of database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
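<p>If you&#39;d rather skip the sed subshell, the rename can also be built with POSIX parameter expansion. This sketch is a dry run: it only prints the convert commands it would run (drop the echo to actually execute them), so you can try it without ImageMagick installed.</p>

```shell
# dry run: print each convert invocation instead of executing it,
# building the output name with ${IMAGE%.jpg} rather than piping through sed
for IMAGE in *.jpg; do
  echo convert -resize '1280x720' "$IMAGE" "${IMAGE%.jpg}-resized.jpg"
done
```

<p>Parameter expansion avoids spawning an extra sed process per file, which adds up over a large directory of images.</p>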
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin&#39;. Now, can you see where this is going with respect to our delete? Using just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch called someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
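<p>If you want to see the whole thing end to end without touching a real project, here is a self-contained scratch demo (the repository and branch names are made up for the demo):</p>

```shell
# scratch demo: fabricate a bare "remote", push a branch to it, then delete
# the remote branch by pushing nothing into it
cd "$(mktemp -d)"
git init -q --bare origin.git
git clone -q origin.git work && cd work
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
git push -q origin HEAD
git branch scratch && git push -q origin scratch
git ls-remote --heads origin      # lists refs/heads/scratch
git push origin :scratch          # push "nothing" into scratch, i.e. delete it
git ls-remote --heads origin      # scratch is gone
```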
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do the same. What is different is that Sponge waits until end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it commences writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 (cp1252) to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
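<p>Here is a quick self-contained sanity check of the conversion itself, assuming a glibc iconv; \351 is the octal escape for the cp1252 byte 0xE9 (&#39;é&#39;):</p>

```shell
# write a single cp1252-encoded byte to a file, then convert it to UTF-8;
# \351 (octal) is 0xE9, "é" in cp1252
printf 'caf\351\n' > sample.txt
iconv -f cp1252 -t utf-8 sample.txt    # prints "café" as valid UTF-8
```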
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
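<p>If moreutils isn&#39;t to hand, the soak-then-write behaviour is easy to sketch yourself. The soak function below is a hypothetical stand-in for sponge (it lacks sponge&#39;s error handling and permission preservation):</p>

```shell
# hypothetical minimal stand-in for sponge: buffer stdin in a temp file,
# then move it over the target only once all input has been read
soak() { tmp=$(mktemp) && cat > "$tmp" && mv "$tmp" "$1"; }

printf 'hello\n' > f.txt
tr 'a-z' 'A-Z' < f.txt | soak f.txt
cat f.txt    # HELLO
```

<p>Because the mv happens only after EOF, the reading side of the pipeline has already consumed the original file before it gets replaced.</p>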
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
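<p>You can exercise the same construct locally, with cat standing in for wget, to watch the process substitution at work without touching the network (bash syntax; the explicit -f - makes tar read the archive from stdin regardless of its compiled-in default):</p>

```shell
# build a small tarball, then extract it through a process substitution;
# cat stands in for "wget -q -O -"
cd "$(mktemp -d)"
mkdir demo out
printf 'hi\n' > demo/file.txt
tar czf demo.tar.gz demo
tar zxvf - -C out < <(cat demo.tar.gz)
cat out/demo/file.txt    # hi
```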
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save changes. Then go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
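<p>The general pattern is easy to try without a Drupal install. Here list_modules is a made-up stand-in for the drush pm-list call, just to show how command substitution splices one command&#39;s output into another&#39;s argument list:</p>

```shell
# list_modules is a hypothetical stand-in for
# `drush pm-list --no-core --type=module --pipe`
list_modules() { printf 'ad ad_channel click_filter'; }

# command substitution splices the output straight into the argument list;
# echo shows the command line that would be run
echo drush pm-disable $(list_modules)    # drush pm-disable ad ad_channel click_filter
```

<p>Note the substitution is deliberately unquoted here, so the space-delimited module names split into separate arguments.</p>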
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but it has not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
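<p>Newer Git (1.7.0 and later) can collapse the last two steps into one with push -u. A self-contained scratch demo:</p>

```shell
# scratch demo: -u (--set-upstream) pushes the branch and sets its upstream
# in a single step, replacing the separate push + branch --set-upstream
cd "$(mktemp -d)"
git init -q --bare origin.git
git clone -q origin.git work && cd work
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
git checkout -q -b my-new-feature
git push -u origin my-new-feature
git status -sb    # ## my-new-feature...origin/my-new-feature
```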
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
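<p>A quick way to confirm the change took is to list the user&#39;s groups with id (shown here for the current user; substitute the account you changed):</p>

```shell
# -G lists the user's group IDs, -n turns them into names
id -Gn "$(whoami)"
```

<p>Note that an existing login session won&#39;t pick up the new group; the user has to log in again for it to take effect.</p>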
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
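<p>The same rescue can be replayed end-to-end in a throwaway repository; <code>git stash pop</code> is an alternative to <code>save</code> + <code>apply</code> that also drops the stash entry once it applies cleanly:</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email demo@example.com
git config user.name Demo
echo v1 > notes.txt
git add notes.txt
git commit -qm 'Initial commit'
git branch develop
echo v2 > notes.txt        # accidental edit on the wrong branch
git stash push -q          # shelve the change
git checkout -q develop
git stash pop -q           # re-apply it on develop (and drop the stash)
content=$(cat notes.txt)
echo "$content"            # v2
```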
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
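<p>A shorthand worth knowing (assuming git 1.6.2 or later): <code>git checkout --track origin/develop</code> picks the matching local branch name for you. A self-contained sketch using a throwaway pair of repositories:</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/src" && cd "$tmp/src"
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m 'initial'
git branch develop
git clone -q --bare "$tmp/src" "$tmp/origin.git"   # stand-in remote
git clone -q "$tmp/origin.git" "$tmp/work" && cd "$tmp/work"
git checkout -q --track origin/develop             # creates local 'develop'
upstream=$(git rev-parse --abbrev-ref '@{upstream}')
echo "$upstream"                                   # origin/develop
```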
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf command-line tool), do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
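<p>The same identity can be written with <code>git config</code> itself instead of editing the file by hand (on a real box: <code>sudo -u jenkins git config --global ...</code>). The sketch below uses a temporary directory standing in for /var/lib/jenkins:</p>

```shell
jenkins_home=$(mktemp -d)   # stand-in for /var/lib/jenkins
HOME="$jenkins_home" git config --global user.name "Jenkins"
HOME="$jenkins_home" git config --global user.email "jenkins@localhost"
name=$(HOME="$jenkins_home" git config --global user.name)
echo "$name"                # Jenkins
```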
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework Test Cases, we have our ant build file setup and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build once before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire a http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><code>reload</code> - Reload server configuration</li>
<li><code>restart</code> - Restart the server</li>
<li><code>exit</code> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
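<p>As a concrete (hypothetical) example, triggering a build of a job named &#39;Bookings&#39; from the cli looks like this; the block is guarded so it is a harmless no-op on machines with no local Jenkins instance:</p>

```shell
# 'Bookings' is a hypothetical job name; adjust the server URL to taste.
if curl -sf http://localhost:8080/ >/dev/null 2>&1; then
  java -jar jenkins-cli.jar -s http://localhost:8080 build Bookings
  msg="build triggered"
else
  msg="no local Jenkins; skipping"
fi
echo "$msg"
```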
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins package starts a Java web container on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one that has cost me the most time.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature, and the following query worked very well.</p>
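<p>In outline, a size report built on INFORMATION_SCHEMA looks like the sketch below (not the linked gist&#39;s literal query; data_length and index_length are byte counts, and the 100 MB threshold is illustrative). The block only runs the query if a mysql client is available:</p>

```shell
# Sketch: report tables larger than ~100 MB, biggest first.
query="SELECT table_schema, table_name, table_rows,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE data_length + index_length > 100 * 1024 * 1024
ORDER BY data_length + index_length DESC;"
echo "$query"
# Run it only where a mysql client (and credentials) exist; otherwise no-op.
command -v mysql >/dev/null 2>&1 && mysql -e "$query" || true
```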
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an interrupted sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace, though: a hs_* dump file in your home directory. In that file, among a lot of other verbiage, is a message like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful by itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the graphical package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>It turns out there&#39;s already something built in to do just that:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into GitHub (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it can get tricky, because a subsequent plain git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally, I find option 4 the best, with the least amount of work.</p>
<p>Alternatively, to avoid having to do any of this, you can set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing the list of tables you want to export. Another reply had a better way: call mysql to get a list of tables matching a glob pattern, put them in an array, then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until those updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, instead using the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
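<p>Note that to_sym only rescues entries stored under Symbol keys; the String-keyed &#39;goodbye world&#39; entry above will now be the one that misses. A hypothetical helper (my own sketch, not part of the original examples) that normalises every key to a Symbol sidesteps the mismatch entirely:</p>

```ruby
# Hypothetical helper (not from the original example): copy a hash,
# converting every key to a Symbol so lookups behave consistently no
# matter how the entries were originally defined.
def symbolize_keys(hash)
  hash.each_with_object({}) { |(k, v), out| out[k.to_sym] = v }
end

mixed = { :mykey => 'hello world', 'another_key' => 'goodbye world' }
normalised = symbolize_keys(mixed)
```

<p>With the keys normalised, both lookups with Symbols succeed.</p>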
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining variables in Ruby, this looks like yet another variable-like construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods (the main ones return its string value and its integer value), it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby String. In Ruby (and most OO languages), two strings are different objects even if they consist of the same sequence of characters. In Ruby, two Symbols made of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
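<p>This one-copy behaviour is easy to verify yourself, since equal? compares object identity rather than value (a quick sketch of my own, not from the original post):</p>

```ruby
# Two strings built from the same characters are equal in value but are
# distinct objects in memory...
a = String.new("key")
b = String.new("key")
same_value  = (a == b)    # true: same sequence of characters
same_object = a.equal?(b) # false: two separate String objects

# ...whereas a Symbol of a given name is only ever one object, so every
# occurrence of :key shares a single object id.
same_symbol = :key.equal?(:key)
```

<p>Because every occurrence of :key refers to the one object, using Symbols as hashkeys avoids allocating a fresh String for each literal.</p>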
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. In a high-level language, the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best explanations I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or, at a minimum, creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ANSI or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of those tables can be lost as writes occur to them during the backup process; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports, as MySQL will only build the index once, at the end of the import. With keys enabled, the index is updated after each row is inserted; given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default, each statement against an InnoDB table is autocommitted. This carries unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository (with a working tree) in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to: $ svn del</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Shows pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to make an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is short for --remote, and -d is short for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by recreating the cache directory PEAR expects:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But getting a test environment set up and representative of the live system is often a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially, Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want Git or SVN SCM access, you&#39;ll want to install those plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite tedious, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to set up your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C, and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed to the program.</p>&#13;
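<p>To make the contrast concrete, here is a minimal sketch (the filename argv_demo.rb is just an example name): Ruby still exposes the program name, but via the built-in global $0 (also available as $PROGRAM_NAME) rather than via ARGV.</p>&#13;

```ruby
#!/usr/bin/env ruby
# argv_demo.rb - ARGV holds only the arguments;
# the program name lives in $0 (alias $PROGRAM_NAME), not in ARGV[0].
puts "program name:   #{$0}"
puts "first argument: #{ARGV[0].inspect}"
```

<p>Running ruby argv_demo.rb helloworld prints the script name on the first line and "helloworld" on the second.</p>&#13;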
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin / feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OS X MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature; you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored, the distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, though, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This utility can easily be installed using yum, apt, emerge, synaptic, or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code> <code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you.  So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>Guiding me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby'. I checked out the Pragmatic Programmers' ubiquitous 'Programming Ruby', but the third edition seems better suited to propping open doors than to teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, if you prefer), Procs and mixins (traits), one simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables declared with a capitalised name. Oh sorry, that should be capitalized - Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
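<p>A minimal sketch of that pliability (the constant name here is purely illustrative):</p>

```ruby
# A constant in Ruby is any identifier that begins with a capital letter.
MAX_RETRIES = 3

# Reassigning it merely earns a warning on stderr
# ("warning: already initialized constant MAX_RETRIES") -- the change sticks.
MAX_RETRIES = 5

puts MAX_RETRIES # => 5
```

<p>Even <code>freeze</code> only protects the object a constant refers to, not the binding itself, so rebinding the name still works.</p>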
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’: you know, if it quacks like a duck, it is a duck. Unfortunately, in Ruby a constant quacks like a duck but bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sysadmin monkeys who haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable sigils and be left with a recognisable C fragment.</p>&#13;
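<p>To illustrate, here is a self-contained sketch: a throwaway server and a client in one script, using PHP's stream socket functions (the loopback address and payload are purely illustrative). Strip the $ sigils and the client half reads much like its C counterpart.</p>

```php
<?php
// Bind a throwaway server on an ephemeral loopback port
$server = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr);
$addr   = stream_socket_get_name($server, false);

// Client side: the same open/write/read/close rhythm as C's
// socket(), connect(), send(), recv(), close()
$client = stream_socket_client("tcp://$addr", $errno, $errstr, 5);
fwrite($client, "ping\n");

// Server side: accept the pending connection, echo the line back uppercased
$conn = stream_socket_accept($server);
fwrite($conn, strtoupper(fgets($conn)));
fclose($conn);

$reply = fgets($client); // "PING"
echo $reply;
fclose($client);
fclose($server);
```

<p>Swap <code>stream_socket_client</code> for <code>socket_create()</code>/<code>socket_connect()</code> and the correspondence with the C calls becomes even more literal.</p>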
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an appreciation of memory management. C is nearly unique among the programming environments in common use today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL '\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array with room for 20 characters (19 usable plus the terminating NUL), and str refers to an address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in: originally, these were where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. A way around the need to physically create a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>Looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
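<p>As a quick sketch, here is php://memory paired with fputcsv to build a CSV string without touching the filesystem (the row data is just an illustration):</p>

```php
<?php
$rows = [
    ['id', 'name'],
    [1, 'widget'],
];

// Open an in-memory stream and write the rows with the normal CSV functions
$fh = fopen('php://memory', 'wb+');
foreach ($rows as $row) {
    fputcsv($fh, $row);
}

// Rewind before reading; the file pointer sits at the end after writing
rewind($fh);
$csv = stream_get_contents($fh);
fclose($fh);

echo $csv;
// id,name
// 1,widget
```

<p>For large payloads, php://temp behaves the same way but spills to a temporary file once the data exceeds a memory threshold.</p>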
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, e.g. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit the POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to put it in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals .</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
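To make the collision concrete, here is a throwaway shell sketch (using a scratch directory; the filenames are just illustrative):

```shell
# On a case-sensitive filesystem (typical Linux/Unix), names differing
# only in case are distinct files -- so the classloader's lowercased
# guess never finds the camelcased file. On Windows, or default HFS+,
# the second write would simply reuse the first file.
cd "$(mktemp -d)"
echo 'camelcased' > ALongNameForAModel.php
echo 'lowercased' > Alongnameforamodel.php
ls -1 | wc -l   # 2 on a case-sensitive filesystem
```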
<p>On Windows this is fine; on a case-sensitive file system, e.g. case-sensitive HFS+ (Mac) or a typical Unix file system, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you would have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around 1/5s; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old and the WHERE clause above matches it.</p>&#13;
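If you want to sanity-check that cutoff arithmetic without firing up MySQL, GNU date (on Linux; the BSD date shipped with Mac OSX uses -v instead) performs the same subtraction:

```shell
# Same calculation as DATE_SUB('2010-05-20', INTERVAL 30 DAY), done
# with GNU date and pinned to a fixed day so the result is reproducible.
date -d '2010-05-20 30 days ago' +%F   # 2010-04-20
```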
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/<span class="Apple-style-span"><span class="Apple-style-span">Mage/Adminhtml/etc/config.xml.</span></span></p>&#13;
<p><span class="Apple-style-span"><span class="Apple-style-span">This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</span></span></p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer creates this file from local.xml.template, substituting in the details you provide.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to a Magento class, Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included (note the echo: getChildHtml returns the rendered block rather than printing it). Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB, or de_DE (and you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile, and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
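As a quick sketch (de_DE.UTF-8 here as an example; substitute whatever 'locale -a' actually lists on your machine):

```shell
# Show which UTF-8 capable locales this machine has installed...
locale -a 2>/dev/null | grep -i 'utf' | head -5
# ...then switch the current shell over to one of them. Append the
# same export line to ~/.profile or ~/.bash_profile to make it stick.
export LC_ALL='de_DE.UTF-8'
```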
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ. A cp-1252 trademark symbol has a different code in utf-8, so if you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</p>
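The command-line iconv utility (available on most unix-like systems) does the same conversion, which is handy for spot-checking bytes before they go anywhere near the database:

```shell
# Convert a single ISO-8859-1 byte (0xE9, an 'e' acute) to UTF-8 and
# dump the result: the one-byte character becomes the pair 0xC3 0xA9.
printf '\xe9' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1
```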
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't work (at least not with the BSD sed shipped on Mac OSX), and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt (the empty string tells BSD sed to edit in place without keeping a backup copy). </p>&#13;
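One caveat if the same script must also run on Linux: GNU sed expects the backup suffix glued to the flag and chokes on the separate '' argument. Attaching a non-empty suffix works with both implementations, at the cost of a backup file:

```shell
# sed -i.bak (no space before the suffix) is accepted by BSD sed
# (Mac OSX) and GNU sed (Linux) alike: the file is edited in place
# and the original is kept as helloworld.txt.bak.
cd "$(mktemp -d)"
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt       # goodbye world
```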
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento; below I've summarised the most common variants you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page, you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expansion returns the number of elements in FILES, and the seq command produces a sequence of numbers from x to y, so $(seq 0 $((${#FILES[@]} - 1))) yields one index for each element of the array. If you call seq 0 4, you will get a line with 0, 1, 2, 3, 4 on it.</p>&#13;
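<p>When you do not need the numeric index, bash can expand the array values directly, which sidesteps seq entirely. A sketch of the same loop, with quoting added so paths containing spaces survive:</p>

```shell
# The same loop without seq: expand the array values directly.
# Quoting the expansions keeps paths with spaces intact.
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```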
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, or test payment and shipping account credentials.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
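<p>For the values Magento stores in the clear, a short bash sketch is all you need. The paths below are Magento&#39;s standard base URL settings; the staging URL is a placeholder:</p>

```shell
# Emit SQL to repoint base URLs at a staging domain; pipe the output into
# mysql against the staging database. The URL here is a placeholder.
STAGING_URL="http://staging.example.com/"

for CONFIG_PATH in web/unsecure/base_url web/secure/base_url; do
  printf "UPDATE core_config_data SET value = '%s' WHERE path = '%s';\n" \
    "$STAGING_URL" "$CONFIG_PATH"
done
```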
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
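<p>To verify the change took, id -Gn prints a user&#39;s group memberships on both OSX and Linux. Here is a small helper of my own (not part of dscl) that makes the check scriptable:</p>

```shell
# Report whether the current user belongs to the named group, using the
# portable `id -Gn` group listing (works on OSX and Linux alike).
in_group() {
  id -Gn | tr ' ' '\n' | grep -qx "$1"
}

# The primary group (id -gn) always appears in the listing, so this prints "yes".
in_group "$(id -gn)" && echo "yes"
```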
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname</p>
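<p>To see what the alias expands to, here is a quick sketch in a throwaway repository (the zendesk branch name and the origin remote are just the examples from above):</p>

```shell
# Build the same upstream name the alias would use for the current branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/zendesk   # pretend we are on the zendesk branch
branch=$(git symbolic-ref --short HEAD)
echo "git branch --set-upstream-to=origin/$branch"
# prints: git branch --set-upstream-to=origin/zendesk
```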
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables cannot be passed along the pipeline, as each new subprocess starts with a brand new environment.</p>
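<p>A minimal demonstration of the gotcha:</p>

```shell
# The while loop runs in a subshell, so its increments never reach the
# parent shell: the final echo prints 0, not 2.
count=0
printf 'a\nb\n' | while read -r line; do
  count=$((count + 1))
done
echo "$count"
```

<p>One workaround is process substitution, while read -r line; do ... done &lt; &lt;(command), which keeps the loop in the parent shell.</p>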
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
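<p>The equivalent stanza, if you prefer to edit ~/.gitconfig directly rather than run the command:</p>

```ini
[merge]
	ff = only
```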
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes; this book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. In my experience that leads, at best, to confusion, and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to see more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a>, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: the Statist style and the Mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
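<p>NULLIF and MIN behave the same way in SQLite, which makes for a quick way to convince yourself at the shell (table and prices invented for the demo; requires the sqlite3 client):</p>

```shell
# Two groups; group 1 contains a 0.00 price that MIN should skip.
sqlite3 :memory: "
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 3.25);
  -- 0.00 prices become NULL, so MIN ignores them
  SELECT group_id, MIN(NULLIF(price, 0)) FROM products GROUP BY group_id;"
```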
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select which lets us select a version to activate and give us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
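<p>Those two edits can also be scripted. The sketch below works on a scratch stand-in file, so it is safe to run anywhere; point the sed commands at /opt/local/etc/php54/php.ini for the real thing. It assumes GNU sed (which Macports provides), and Europe/London is only an example timezone:</p>

```shell
# Demo on a scratch stand-in for php.ini (GNU sed syntax).
ini=$(mktemp)
printf ';date.timezone =\n;cgi.fix_pathinfo=1\n' > "$ini"

# Uncomment-and-set: exactly the two edits described above.
sed -i 's|^;*date.timezone =.*|date.timezone = Europe/London|' "$ini"
sed -i 's|^;*cgi.fix_pathinfo=.*|cgi.fix_pathinfo=0|' "$ini"
cat "$ini"
```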
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out that&#39;s because the Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…).</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache or Nginx? It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for Magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert the following, or append the hostname to an existing 127.0.0.1 line
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing it under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
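<p>If you want to be sure the certificate and key really belong together before pointing Nginx at them, you can compare their RSA moduli. The snippet below is a self-contained sketch using throwaway filenames rather than the real pair above:</p>

```shell
# Generate a throwaway self-signed pair non-interactively (-subj avoids
# the prompts), then confirm the certificate matches the private key by
# comparing RSA moduli. Filenames here are placeholders for the demo.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=magento.dev" -keyout demo.key -out demo.crt 2>/dev/null
crt_mod=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in demo.key | openssl md5)
[ "$crt_mod" = "$key_mod" ] && echo "key and certificate match"
rm -f demo.key demo.crt
```

<p>The same two openssl commands, pointed at the files under /opt/local/etc/nginx/ssl, will tell you whether a mismatched pair is behind any SSL handshake errors.</p>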
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
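<p>If you want to check this rather than take it on faith, a small helper can walk each directory component and flag any that the web server can&#39;t traverse. This is just an illustrative sketch (the helper name and example path are mine, not part of the setup):</p>

```shell
# Walk every directory component above (and including) a docroot and
# report any that lack the execute bit for "others".
check_traversal() {
    dir=$1
    while [ -n "$dir" ] && [ "$dir" != "/" ]; do
        # %A is GNU stat (Linux); fall back to BSD stat's %Sp on OS X
        perms=$(stat -c '%A' "$dir" 2>/dev/null || stat -f '%Sp' "$dir")
        case $perms in
            *x|*t) ;;                              # others can traverse
            *)     echo "not traversable: $dir" ;;
        esac
        dir=$(dirname "$dir")
    done
}
if [ -d /Users/aaron/Sites ]; then
    check_traversal /Users/aaron/Sites
fi
```

<p>Any path it prints needs a chmod a+x before Nginx can serve files beneath it.</p>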
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track them down.</p>
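<p>Since sshfs drives a regular ssh process under the hood, a third option is to record the key in ~/.ssh/config once and skip the -o flag entirely. The host alias and paths below are illustrative:</p>

```shell
# Record the host and key once in ~/.ssh/config (values are examples).
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
EOF
```

<p>After that, a plain sshfs awshost:/var/www/ ~/Sites/awshost picks up the key automatically, as does regular ssh and scp.</p>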
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);  // prints //www.google.com/a/b/c/d.img
</code></pre>
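<p>As a small aside, the shell gets this one right: the equivalent last-segment grab is a built-in whose name actually describes what it does.</p>

```shell
# basename returns everything after the last '/', which for a URL path
# is exactly the trailing file name.
url='http://www.google.com/a/b/c/d.img'
basename "$url"   # prints d.img
```
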
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
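<p>The copy itself is mechanical. The sketch below runs against a scratch directory standing in for the Magento root, so it is safe to try anywhere; in a real store you would run the mkdir/cp pair from your actual Magento base directory:</p>

```shell
# Set up a scratch Magento root purely for demonstration purposes.
cd "$(mktemp -d)"
mkdir -p app/code/core/Mage/GoogleCheckout/Model/Api/Xml
touch app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php

# The actual fix: mirror the path under local/ so the classloader finds
# the patched copy before the core one.
mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
   app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
```

<p>Then edit only the copy under app/code/local, leaving the core file untouched for clean upgrades.</p>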
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Dont Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order local, community then core. This means if two classes have the name Mage_Core_Model_Foo one exists in local the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drinks: don&#39;t terminate Nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. In November, though, I resolved to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still cohere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made them fashionable, but Smalltalk sported things like block closures over a quarter of a century ago. Reflection, Metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Although the team formalised most of the vocabulary for OO software development while building Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. In July last year I left my full-time job for the world of consulting/freelancing, imagining it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee or lunch, waiting at the supermarket, or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I&#39;m slightly obsessive in that I need to keep the unread count at zero, so at the end of a thirty-minute work sprint I would take five minutes to quickly flick through the list. Most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book&#39;s ideas to be more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few thirty-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
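<p>In bash&#39;s emacs mode this comes from the readline command edit-and-execute-command; zsh doesn&#39;t wire up an equivalent by default, but it ships an edit-command-line widget you can bind yourself. A sketch for ~/.zshrc (the keybinding here is my choice, adjust to taste):</p>

```shell
# zsh: load the bundled edit-command-line widget and bind it to ctrl-x e
autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^Xe' edit-command-line
```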
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
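<p>It works on any delimited input, not just /etc/passwd; a quick demonstration with made-up data:</p>

```shell
# columnate ':'-separated sample data into an aligned table
printf 'user:shell\nroot:/bin/bash\ndaemon:/usr/sbin/nologin\n' | column -s':' -t
```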
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
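<p>A bare zsh doesn&#39;t do either of these out of the box. The options are standard zsh setopts; the &#39;d&#39; and bare-number commands are aliases, and the ones below are an assumption about how setups like oh-my-zsh provide them:</p>

```shell
# sketch for ~/.zshrc; option names are standard zsh, aliases are assumed
setopt AUTO_CD            # typing a directory name cd's into it
setopt AUTO_PUSHD         # every cd pushes the old directory onto the stack
setopt PUSHD_IGNORE_DUPS  # keep the stack free of duplicates
alias d='dirs -v'         # list the stack with indices
for i in 1 2 3 4 5 6 7 8 9; do alias "$i"="cd -$i"; done
```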
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at Github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
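<p>If mysql2 comes in via a Gemfile instead, bundler can be told to remember the same flags (a sketch assuming the same Macports paths as above):</p>

```shell
# store build flags for the mysql2 gem so `bundle install` uses them
bundle config build.mysql2 \
  --with-mysql-lib=/opt/local/lib/mysql55/mysql \
  --with-mysql-include=/opt/local/include/mysql55/mysql
```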
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine. However, if you want to debug during a phpunit test you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings early enough for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward all connections to its own localhost port 9000 back to port 9000 on the machine you connected from. When xdebug goes to connect to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full-stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strenuous efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires a stable release of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) or dubious (at worst) quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
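<p>Process substitution isn&#39;t specific to xmllint; any command that expects a filename can read from one. A tiny generic illustration:</p>

```shell
# <(cmd) expands to a /dev/fd/N path whose contents are cmd's output
diff <(printf '1\n2\n') <(printf '1\n2\n') && echo 'identical'
```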
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
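<p>If you want to try it without touching a real project, a throwaway repo works (the file name here is invented):</p>

```shell
# build a scratch repo and print just the file list for the last commit
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo 'hello' > notes.txt
git add notes.txt
git -c user.email=you@example.com -c user.name=you commit -q -m 'add notes'
git show --name-only --format= HEAD   # prints the file list for HEAD
```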
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode.</p>
<p>First, shut down the running instance and then start it directly:</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically there are more examples of bad code out there than for pretty much any other platform, because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun for spiking out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there: advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general-purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh this list I need to prune my branches. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
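<p>If you want to see the whole cycle end to end without touching a real project, here is a throwaway sketch (every repository path and branch name below is invented for the demo):</p>

```shell
# Simulate a remote, a deletion "from another host", and the prune.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git clone -q remote.git work 2>/dev/null
cd work
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m init
branch=$(git symbolic-ref --short HEAD)       # master or main, per your git defaults
git push -q origin "$branch"
git branch stale
git push -q origin stale
cd "$tmp"
git clone -q remote.git elsewhere
(cd elsewhere && git push -q origin :stale)   # the branch is deleted from another clone
cd "$tmp/work"
git branch -r | grep stale                    # ...but this repo still lists origin/stale
git remote prune origin
git branch -r | grep stale || true            # gone after pruning
```

<p>As an aside, git fetch --prune will do the fetch and the prune in one step.</p>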
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
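<p>For reference, the shape of that local.xml override is something like the sketch below. I&#39;m only showing two of the events, and the exact event and observer names should be checked against Mage_Log&#39;s own config.xml for your Magento version:</p>
<pre><code>&lt;config&gt;
    &lt;frontend&gt;
        &lt;events&gt;
            &lt;controller_action_predispatch&gt;
                &lt;observers&gt;&lt;log&gt;&lt;type&gt;disabled&lt;/type&gt;&lt;/log&gt;&lt;/observers&gt;
            &lt;/controller_action_predispatch&gt;
            &lt;controller_action_postdispatch&gt;
                &lt;observers&gt;&lt;log&gt;&lt;type&gt;disabled&lt;/type&gt;&lt;/log&gt;&lt;/observers&gt;
            &lt;/controller_action_postdispatch&gt;
            &lt;!-- repeat for the remaining Mage_Log observers, e.g. customer_login --&gt;
        &lt;/events&gt;
    &lt;/frontend&gt;
&lt;/config&gt;
</code></pre>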
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (programmers are human) missing the other, exactly identical, line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
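<p>As an illustrative sketch, such an adminhtml.xml fragment looks something like this; the menu node (here the Promotions menu) and the module name are made up, so substitute the path of the entry you actually want to hide:</p>
<pre><code>&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;config&gt;
    &lt;menu&gt;
        &lt;promo&gt;
            &lt;depends&gt;&lt;module&gt;Mage_DoesNotExist&lt;/module&gt;&lt;/depends&gt;
        &lt;/promo&gt;
    &lt;/menu&gt;
&lt;/config&gt;
</code></pre>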
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and &#39;;&#39; terminates the command sequence (much like it does in regular bash); it is backslash-escaped so the shell passes it through to find.</p>
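<p>Here is the whole recipe as a self-contained sketch you can run in a scratch directory (all the file names are invented, and touch creates the boundary files):</p>

```shell
# Throwaway demo of the touch + find date-range trick.
set -e
d=$(mktemp -d); cd "$d"
touch -t 202001010000 start_date_file
touch -t 202006150000 inside_range       # falls between the two boundaries
touch -t 202101010000 end_date_file
touch -t 201901010000 too_old            # outside the range
touch -t 202201010000 too_new            # outside the range
# The end boundary's mtime equals itself, so it satisfies "! -newer" and must
# be excluded by name; the start boundary is already excluded by -newer.
find . -type f -newer start_date_file ! -newer end_date_file ! -name end_date_file -ls
find . -type f -newer start_date_file ! -newer end_date_file ! -name end_date_file -exec rm -f {} \;
ls
```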
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
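<p>Here is that edit as a sketch, pointed at a scratch copy rather than the real /etc/shells so it needs no sudo (the Macports bash path is the one from above; the chsh step itself still has to be run for real):</p>

```shell
set -e
shells_file=$(mktemp)                        # stand-in for the real /etc/shells
printf '/bin/bash\n/bin/sh\n' > "$shells_file"
new_shell=/opt/local/bin/bash                # the Macports bash
# Append only if it is not already listed, so the edit is safe to re-run
grep -qx "$new_shell" "$shells_file" || printf '%s\n' "$new_shell" >> "$shells_file"
cat "$shells_file"
```

<p>With the real /etc/shells updated like this, chsh will accept /opt/local/bin/bash.</p>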
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; - | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
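<p>If you want to try the pipeline without a database handy, printf can stand in for the mysql client (the colour values here are invented):</p>

```shell
# printf plays the part of `mysql --silent`, emitting one value per line.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# prints: ['red','green','blue'];
```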
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query a date range, you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is:</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products, and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and let us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. While this means a new validator has to be constructed on each iteration, it also allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a c compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot, and by break, in the absolute best case, I mean merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. First step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution, so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
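<p>The copy itself is a single command (the path assumes the /mnt/ubuntu mountpoint used above):</p>
<pre><code>$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>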
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions: the kernel and kernel modules will be those of the host, so if you need to access some specific hardware you need to set that up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
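<p>For reference, the repair from inside the chroot looked roughly like the following; the /dev/sda device passed to grub-install is an assumption, so substitute whichever disk your machine boots from:</p>
<pre><code>$ apt-get update
$ apt-get dist-upgrade
$ update-grub
$ grub-install /dev/sda
</code></pre>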
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) call to see the SQL being generated. The SQL was fine, and when I pasted it into MySQL I got a bunch of valid-looking results. Funnily enough, though, the status column came first and the product id column second (unlike in the if branch, where the order was reversed). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array result set where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it gets used as the key.</p>
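<p>To make the failure mode concrete, here is a small standalone sketch, not the Zend code itself, of how a fetchPairs()-style fold behaves with each column order:</p>
<pre><code>&lt;?php
// Standalone sketch of Zend_Db&#39;s fetchPairs(): the FIRST selected
// column becomes the array key, the second becomes the value.
function fetchPairs(array $rows) {
    $pairs = array();
    foreach ($rows as $row) {
        $values = array_values($row);
        $pairs[$values[0]] = $values[1];
    }
    return $pairs;
}

// Buggy column order (status first): rows collapse onto one key
// per unique status, so most products lose their entry.
$buggy = fetchPairs(array(
    array(&#39;value&#39; =&gt; 1, &#39;entity_id&#39; =&gt; 10),
    array(&#39;value&#39; =&gt; 1, &#39;entity_id&#39; =&gt; 11),
    array(&#39;value&#39; =&gt; 2, &#39;entity_id&#39; =&gt; 12),
)); // array(1 =&gt; 11, 2 =&gt; 12)

// Fixed column order (entity_id first): one entry per product.
$fixed = fetchPairs(array(
    array(&#39;entity_id&#39; =&gt; 10, &#39;value&#39; =&gt; 1),
    array(&#39;entity_id&#39; =&gt; 11, &#39;value&#39; =&gt; 1),
    array(&#39;entity_id&#39; =&gt; 12, &#39;value&#39; =&gt; 2),
)); // array(10 =&gt; 1, 11 =&gt; 1, 12 =&gt; 2)
</code></pre>
<p>With only two distinct status values, every product id except the last one per status vanishes from the buggy result, which is exactly why the grouped products all looked out of stock.</p>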
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain-specific (Mock Object) text, serves as a wonderful launching pad for further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open up a login shell in other ways, such as with the su - command or an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc only runs if your .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc line in your .bash_profile and then put everything in .bashrc.</p>
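<p>As a sketch, that pattern in ~/.bash_profile might look like the following (the PATH line is just an illustrative example):</p>
<pre><code># ~/.bash_profile - read by login shells only
# one-time environment setup
export PATH=&quot;$HOME/bin:$PATH&quot;

# defer everything else to .bashrc so login and non-login
# interactive shells end up configured the same way
if [ -f &quot;$HOME/.bashrc&quot; ]; then
    . &quot;$HOME/.bashrc&quot;
fi
</code></pre>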
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method($this-&gt;anything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
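<p>Outside of PHPUnit the effect is easy to picture with a plain-PHP sketch; the FluentStub class here is hypothetical, standing in for the generated mock:</p>
<pre><code>&lt;?php
// Every call on the stub returns the stub itself, mimicking
// -&gt;will($this-&gt;returnSelf()) on a PHPUnit mock object.
class FluentStub
{
    public function __call($method, $args)
    {
        return $this;
    }
}

$mail = new FluentStub();
$result = $mail-&gt;setSubject(&#39;Hi&#39;)
               -&gt;setBodyText(&#39;Hello&#39;)
               -&gt;addTo(&#39;user@example.com&#39;);
// $result is the stub itself, so the fluent chain never breaks
</code></pre>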
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, it reduces the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
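<p>To see what each stage contributes, you can run the middle of the pipeline on some sample dpkg output (the package names are made up):</p>
<pre><code>printf &#39;oldpkg deinstall\nkeptpkg install\n&#39; \
  | grep deinstall \
  | sed &#39;s/deinstall/purge/&#39;
</code></pre>
<p>This prints &#39;oldpkg purge&#39;: only packages marked deinstall survive the grep, and sed rewrites their selection state so that feeding the result back through dpkg --set-selections queues them for dpkg -Pa to purge.</p>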
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>The modifyvm subcommand must only be used when the VM is powered off; use controlvm on running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that, then: we need compatible x86 shared libraries. But when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately, when upgrading my test-suites to be 3.6 compatible, is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing it this way already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
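<p>Incidentally, the sed call in that loop only rewrites the filename; bash parameter expansion can do the same job without a subshell (a small aside, independent of ImageMagick itself):</p>

```shell
# ${IMAGE%.jpg} strips the shortest matching ".jpg" suffix, so the
# sed pipeline can be replaced with pure bash
IMAGE="photo.jpg"
echo "${IMAGE%.jpg}-resized.jpg"   # -> photo-resized.jpg
```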
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 exactly, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert preserves the 16:10 ratio and produces something like 1152x720 instead. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
&gt;   - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? A push with just a leading : (and no local branch name) is basically saying push nothing into the remote branch someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
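<p>If you want to watch the whole round trip, it only takes a moment in a pair of throwaway repositories (assuming git is installed; all paths and names here are invented for the demo):</p>

```shell
# A bare repo stands in for the remote, a clone for our working copy
remote=$(mktemp -d)
clone=$(mktemp -d)
git init --bare "$remote"
git clone "$remote" "$clone"
cd "$clone"
git config user.email demo@example.com
git config user.name Demo

# Publish the current branch, then a second branch called develop
git commit --allow-empty -m 'initial'
git push origin HEAD
git push origin HEAD:develop

git branch -r              # origin/develop is now listed
git push origin :develop   # push "nothing" into it, i.e. delete it
git branch -r              # origin/develop is gone
```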
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself this is not remarkable, after all you can use the &#39;&gt;&#39; operator to do it. But what is different, is how Sponge waits until the end-of-file character (EOF) before opening and writing to an output file. I.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
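<p>If moreutils isn&#39;t to hand, the soak-then-write behaviour is easy to approximate with a temporary file (a rough stand-in, not a substitute for the real tool):</p>

```shell
# Minimal sponge stand-in: absorb all of stdin into a temp file
# first, then move it over the target, so the target is never being
# read and written at the same time
sponge() {
    local tmp
    tmp=$(mktemp) || return 1
    cat > "$tmp" && mv "$tmp" "$1"
}

# The same in-place pattern as the iconv loop above
printf 'hello\n' > demo.txt
tr a-z A-Z < demo.txt | sponge demo.txt
cat demo.txt   # -> HELLO
```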
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then we direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
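<p>You can rehearse the same pattern locally without the network by standing cat in for wget (all file names here are invented; note the explicit -f - to be safe with tar builds that don&#39;t default to stdin):</p>

```shell
# Build a small tarball to play with
mkdir -p atarfile
echo 'some text' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile

# Extract straight out of a process substitution, just as with wget:
# <(cat ...) becomes a named pipe whose contents feed tar via <
tar zxvf - < <(cat atarfile.tar.gz)
```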
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
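<p>Since drush needs a live Drupal install to run against, the plumbing is easier to see with plain shell tools standing in for the two drush calls (the file and module names here are invented):</p>

```shell
# Stand-in for the pm-list --pipe output: one module name per line
printf 'ad\nad_channel\nclick_filter\n' > modules.txt

# Command substitution plus word-splitting turns those lines into
# separate arguments for the outer command, just as with pm-disable
echo disable: $(cat modules.txt)   # -> disable: ad ad_channel click_filter
```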
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
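<p>It&#39;s worth confirming the result; id prints a user&#39;s group memberships without needing root (pass a user name such as &#39;aaron&#39; to inspect someone else&#39;s):</p>

```shell
# -G lists group IDs, -n converts them to names; with no user
# argument it reports on the current user
id -Gn
```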
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
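<p>The sequence is easy to rehearse in a scratch repository (the file names are invented; assumes git is installed):</p>

```shell
# Scratch repo with a second branch called develop
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo v1 > notes.txt
git add notes.txt
git commit -qm 'initial'
git branch develop

# "Accidentally" edit on the default branch, then carry it across
echo v2 > notes.txt
git stash save            # working tree is clean again
git checkout -q develop
git stash apply           # the edit reappears on develop
cat notes.txt             # -> v2
```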
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
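<p>For example, a php.ini include_path that puts an application&#39;s bundled library first might look like this (the paths are placeholders, not a recommendation for your system):</p>

```ini
; local application libraries are searched before the global PEAR copy of ZF
include_path = ".:/var/www/myapp/library:/usr/share/php"
```

<p>PHP searches include_path entries left to right, so the bundled copy wins.</p>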
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
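<p>Equivalently you can set the identity with git config instead of editing the file by hand (running it as the jenkins user, e.g. via sudo -u jenkins, is an assumption about your setup). The sketch below uses a scratch HOME so it can be run safely anywhere:</p>

```shell
set -e
# Use an isolated HOME so we don't touch a real user's ~/.gitconfig.
export HOME="$(mktemp -d)"

git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"

git config --global user.name    # prints: Jenkins
```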
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application, but you&#39;re free to use whatever codebase you like. For the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build once before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: Reload the server configuration</li>
<li>restart: Restart the server</li>
<li>exit: Shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
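<p>The embedded gist below may not render in feed readers, and it isn&#39;t reproduced here; a query in the same spirit (a hypothetical sketch, not the gist itself, with a placeholder schema name) can be built on INFORMATION_SCHEMA:</p>

```sql
-- Hypothetical sketch: the ten largest tables in one schema,
-- with data and index sizes in MB. 'your_database' is a placeholder.
SELECT table_name,
       ROUND(data_length  / 1024 / 1024, 2) AS data_mb,
       ROUND(index_length / 1024 / 1024, 2) AS index_mb
FROM   information_schema.TABLES
WHERE  table_schema = 'your_database'
ORDER  BY (data_length + index_length) DESC
LIMIT  10;
```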
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>It turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least work.</p>
<p>You can also run:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>This avoids having to set up tracking manually for new branches.</p>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has in the past been notoriously difficult to employ TDD practices on. Luckily, in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a> testing frameworks released to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev, meanwhile, appears on the surface to provide far greater support for testing, at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from MySQL, but only from tables matching a LIKE pattern, e.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of the tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a LIKE pattern, put them in an array and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on GitHub for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
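<p>The approach boils down to something like the following sketch (a hypothetical reconstruction, not the gist itself; the function name is mine, and for brevity it assumes credentials come from ~/.my.cnf rather than -p prompts):</p>

```shell
# Dump every table in a database whose name matches a LIKE pattern.
mysqldump_bypattern() {
    local user="$1" db="$2" pattern="$3" table
    # -N suppresses the column-name header, giving one bare table name per line
    for table in $(mysql -u"$user" -N -e "SHOW TABLES LIKE '$pattern'" "$db"); do
        mysqldump -u"$user" "$db" "$table"
    done
}
# e.g. mysqldump_bypattern myuser mydb 'mytables%' > mytables.sql
```

<p>See the embedded gist below for the version I actually use.</p>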
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until those updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using instead the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
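<p>If you control the hash yourself, another way out (my own sketch, not part of the original example) is to normalise every key to a Symbol up front, so symbol lookups always succeed:</p>

```ruby
# Build a copy of the hash with all keys converted to Symbols.
myarray = { :mykey => 'hello world', 'another_key' => 'goodbye world' }
symbolised = myarray.inject({}) { |h, (k, v)| h[k.to_sym] = v; h }

['mykey', 'another_key'].each { |k| puts symbolised[k.to_sym] }
# >> hello world
# >> goodbye world
```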
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select PDT Development Tools All in One SDK (leave the others unselected) and click Next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of one is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Programming With Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages) two Strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
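<p>The one-copy claim is easy to verify for yourself (a quick sketch of my own, not from the literature cited here):</p>

```ruby
# Equal Strings are distinct objects; identical Symbols are one shared object.
a = "key"
b = "key"
puts a == b              # true:  same sequence of characters
puts a.equal?(b)         # false: two separate String objects
puts :key.equal?(:key)   # true:  always the very same Symbol object
```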
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: the compression slows the dump down, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump, which means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. It means writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of a table can be lost as writes occur to it during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
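<p>Putting those options together, a backup helper might look something like this (a sketch of my own, not from the official docs; the function name is illustrative and it assumes credentials are read from ~/.my.cnf):</p>

```shell
# Dump a database without blocking writers, with faster-to-import output,
# compressed with gzip.
backup_db() {
    local user="$1" db="$2" outfile="$3"
    mysqldump -u"$user" \
        --single-transaction --skip-lock-tables \
        --disable-keys --no-autocommit \
        "$db" | gzip -c > "$outfile"
}
# e.g. backup_db myuser mydatabase /backups/mydatabase.sql.gz
```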
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified tracked files; alternatively, you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to it to be picked up.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&amp;#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>-r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small: some basic webpage views, maybe a few forms and, likely, some sort of search functionality. This is pretty basic, and if things need to change you can normally change them in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want Git or SVN SCM access, you&#39;ll need to install those plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means 'bath' in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll only sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0]; ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
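If you do still want the program's name in Ruby, it isn't gone, it just lives elsewhere: in the global $0 (also aliased as $PROGRAM_NAME). A quick sketch (the file name is illustrative):

```ruby
#!/usr/bin/env ruby
# file test2.rb (illustrative)
# ARGV carries only the arguments; the program's own name lives in
# the global $0, also available as $PROGRAM_NAME.
puts $0            # the script's name - PHP/C-style argv[0]
puts ARGV.length   # count of actual arguments, program name excluded
```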
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation. I'm used to starting at index 1, though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software, restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OS X Snow Leopard and MacPorts MySQL, you might run into some drama trying to get Ruby and MySQL playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OS X MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're searching the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter, 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
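Those filter values are bit flags, so they can also be OR-ed together to match methods having any of the given attributes. A small self-contained sketch (the Demo class and its method names are my own illustration):

```php
<?php

// Illustrative class: one public static, one public, one protected method.
class Demo
{
    public static function build() {}
    public function render() {}
    protected function helper() {}
}

$r = new ReflectionClass('Demo');

// The filter is a bitwise disjunction: methods matching ANY of the
// given flags are returned, so this yields build() and render(),
// but not the protected, non-static helper().
foreach ($r->getMethods(ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_STATIC) as $m) {
    echo $m->name, "\n";
}
```

Running it prints build and render; helper() is filtered out.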
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As on many platforms, it seems that in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, though, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
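One caution: a blanket s/latin1/utf8/g will also rewrite the string 'latin1' wherever it happens to appear inside post content. A more conservative sketch (the sample dump lines are illustrative) restricts the substitution to lines declaring a character set:

```shell
# Create a tiny illustrative dump: one schema line, plus one data line
# that happens to mention latin1 in its content.
cat > sample_dump.sql <<'EOF'
CREATE TABLE post (body TEXT) ENGINE=MyISAM DEFAULT CHARSET=latin1;
INSERT INTO post VALUES ('a thread discussing latin1 encodings');
EOF

# Only touch lines containing CHARSET= or SET NAMES; leave data alone.
sed -e '/CHARSET=/s/latin1/utf8/g' -e '/SET NAMES/s/latin1/utf8/g' \
    sample_dump.sql > sample_dump_utf8.sql

cat sample_dump_utf8.sql
```

The schema line comes out as CHARSET=utf8 while the INSERT's text is left untouched.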
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any language packs need the same conversion before they can be imported:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way via AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is twofold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh, sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
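A minimal sketch of the behaviour (the names are my own; run it and watch the warnings):

```ruby
LIMIT = 10
LIMIT = 20           # perfectly legal: Ruby just prints
                     # "warning: already initialized constant LIMIT"
puts LIMIT           # => 20

# freeze at least protects the object behind the constant,
# though not the constant binding itself:
NAME = 'aaron'.freeze
# NAME << '!'        # would raise a runtime error: the string is frozen
NAME = 'bob'         # ...but rebinding the constant still only warns
```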
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know: if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sysadmin monkeys who haven't been seduced by Python. And, of course, a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an appreciation of memory management. C is unusual among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the language relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL '\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a buffer of 20 bytes (room for 19 characters plus the terminating NUL), and str refers to the address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone beyond the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
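<p>As a quick illustrative sketch (my addition, not from the original post): the standard library's snprintf takes the buffer size explicitly, so an over-long write is truncated instead of trampling a neighbour's memory. This is the discipline C demands of you.</p>&#13;

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Copy src into a fixed-size buffer without overflowing it.
 * snprintf() always NUL-terminates (for dstsize > 0) and
 * truncates anything that does not fit, so dst can never be
 * overrun past the dstsize bytes reserved for it. */
void copy_bounded(char *dst, size_t dstsize, const char *src)
{
    snprintf(dst, dstsize, "%s", src);
}
```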
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
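<p>To make that concrete, here is a rough sketch (mine, not from the original post) of the kind of work a single '.' concatenation implies in C: measure both strings, allocate a fresh buffer big enough for both, and copy each one in.</p>&#13;

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Roughly what PHP must do under the hood for $a . $b:
 * allocate a new buffer big enough for both strings plus the
 * terminating NUL, then copy both in. The caller frees it. */
char *str_concat(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    char *out = malloc(la + lb + 1);   /* +1 for the '\0' */
    if (out == NULL)
        return NULL;
    memcpy(out, a, la);
    memcpy(out + la, b, lb + 1);       /* copies b's '\0' too */
    return out;
}
```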
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in: these were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
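<p>A quick sketch (my addition, not from the paper linked above) of how to observe your machine's byte order directly in C: store a multi-byte integer, then inspect which byte lands first in memory.</p>&#13;

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* On a little-endian machine the "little end" of 0x01020304
 * (the 0x04 byte) is stored at the lowest address, so it is
 * the first byte we see when inspecting the integer's memory. */
int is_little_endian(void)
{
    uint32_t x = 0x01020304;
    uint8_t first;
    memcpy(&first, &x, 1);     /* read the first byte of x */
    return first == 0x04;
}
```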
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL (including the encoded POST data), the other your server's response.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong><em>The solution is explained here:</em></strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive file-system (e.g. most Unix file-systems, or case-sensitive HFS+ on a Mac) it will not work.</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making only update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well, see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similarly to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
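<p>As a quick sanity check outside of MySQL, the same cutoff arithmetic can be sketched in plain PHP (the fixed date and the table/column names here are purely illustrative):</p>

```php
<?php
// Reproduce DATE_SUB('2010-05-20', INTERVAL 30 DAY) in PHP.
$today  = new DateTimeImmutable('2010-05-20');
$cutoff = $today->sub(new DateInterval('P30D'));

echo $cutoff->format('Y-m-d'), "\n"; // 2010-04-20

// Records dated earlier than the cutoff are more than 30 days old.
$sql = "SELECT * FROM `table` WHERE date_column < '"
     . $cutoff->format('Y-m-d') . "'";
echo $sql, "\n";
```

<p>Letting the database do this with DATE_SUB keeps the comparison in one place, but computing it application-side like this can be handy when you need to log or display the cutoff as well.</p>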
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin URL by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on Mac OSX. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile, and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters, like accents, symbols and umlauts, change. A cp-1252 trademark symbol has a different code in utf-8, so if you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, as that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso-8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default, both these functions expect iso-8859-1 input. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
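<p>Putting the normalise-then-encode steps together in one self-contained sketch (the sample string is invented for illustration):</p>

```php
<?php
// "Café" as ISO-8859-1 bytes: 0xE9 is é in that charset.
$isoText = "Caf\xE9";

// Step 1: normalise the input to utf-8.
$utf8Text = iconv('ISO-8859-1', 'UTF-8', $isoText);

// Step 2: tell htmlentities the text is utf-8 when escaping for output.
// Without the third parameter the extended character would be mangled.
echo htmlentities($utf8Text, ENT_QUOTES, 'UTF-8'), "\n"; // Caf&eacute;
```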
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to use it like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code> (Mac OSX's BSD sed requires -i to be given an explicit, possibly empty, backup extension; GNU sed does not).</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base URL directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page, you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expression returns the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4.</p>&#13;
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, or test payment and shipping account credentials.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You just go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
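<p>To see what the alias boils down to without touching a real remote, here is a disposable sketch using a local bare repository as a stand-in for origin (every path, email and commit message below is invented for the demo):</p>

```shell
#!/bin/sh
# Sketch of what `git sup` does: push without -u, then set the upstream
# for the current branch by hand, as the alias would.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q "$tmp/work" && cd "$tmp/work"
git config user.email dev@example.com
git config user.name Dev
git remote add origin "$tmp/remote.git"
git commit -q --allow-empty -m 'initial commit'
branch=$(git symbolic-ref --short HEAD)      # current branch, as the alias resolves it
git push -q origin "$branch"                 # pushed without -u, so no tracking yet

# This is the command the `sup` alias runs:
git branch --set-upstream-to="origin/$branch"
git rev-parse --abbrev-ref --symbolic-full-name '@{u}'   # shows origin/<branch>
```

<p>A quick <em>git branch -vv</em> afterwards will also show the new tracking relationship.</p>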
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
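<p>A minimal demonstration of the subshell behaviour, plus one common workaround (plain POSIX sh; note that zsh and recent ksh run the last pipeline stage in the current shell, so there the count would survive):</p>

```shell
#!/bin/sh
# The while loop below is a pipeline stage, so in bash/dash it runs in a
# subshell: its increments to count are lost when the pipeline ends.
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "after pipeline: count=$count"    # still 0 in bash/dash

# One workaround: do the counting inside a command substitution instead.
count=$(printf 'a\nb\nc\n' | grep -c .)
echo "via substitution: count=$count"  # count=3
```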
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exactly has changed.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
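<p>Here is a throwaway-repo sketch of the setting in action (repo paths and commit messages are invented for the demo):</p>

```shell
#!/bin/sh
# With merge.ff=only, a merge that cannot fast-forward is refused
# instead of silently creating a merge commit.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo" && cd "$tmp/repo"
git config user.email dev@example.com
git config user.name Dev
git config merge.ff only
git commit -q --allow-empty -m 'base'
main=$(git symbolic-ref --short HEAD)    # works whether the default is master or main
git checkout -qb feature
git commit -q --allow-empty -m 'feature work'
git checkout -q "$main"
git commit -q --allow-empty -m 'mainline work'   # the two histories have now diverged

if git merge -q feature 2>/dev/null; then
    echo 'fast-forwarded cleanly'
else
    echo 'merge refused: rebase first, or force a merge commit with --no-ff'
fi
```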
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing a description message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: the statist style and the mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The mockist approach is less interested in observing state and instead focuses on the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same.</p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group; however, some products actually have a 0.00 price (for whatever reason). You don&#39;t want to show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
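<p>NULLIF is standard SQL rather than a MySQL-ism, so if you have sqlite3 handy you can see the behaviour in one line (the inline table of values 0, 3 and 5 is made up for the demo):</p>

```shell
# MIN ignores NULLs, so NULLIF(v, 0) excludes the zero row: the result is 3
sqlite3 :memory: 'SELECT MIN(NULLIF(v, 0)) FROM (SELECT 0 AS v UNION SELECT 3 UNION SELECT 5);'
```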
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>$ echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
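<p>For context, the alias works because of the shell&#39;s <em>VAR=value command</em> form: the variable is placed in that single command&#39;s environment only, and the surrounding shell is untouched. A quick sketch:</p>

```shell
#!/bin/sh
# `VAR=value command` exports VAR to that one command's environment only;
# the calling shell never sees the assignment.
PHP_PEAR_PHP_BIN=php sh -c 'echo "inside: ${PHP_PEAR_PHP_BIN:-unset}"'
echo "after: ${PHP_PEAR_PHP_BIN:-unset}"
```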
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active, we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for Magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        fastcgi_read_timeout 120; ## proxy_* timeouts do not apply to fastcgi_pass
        fastcgi_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
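<p>For reference, the MAGE_RUN_CODE and MAGE_RUN_TYPE fastcgi params set above arrive in $_SERVER, where Magento&#39;s front controller reads them to pick which store view to bootstrap. This is a simplified sketch of what the stock Magento 1 index.php does with them (the Mage::run() call is commented out so the snippet stands alone):</p>

```php
<?php
// Simplified from Magento 1's stock index.php: the fastcgi_param values
// arrive in $_SERVER and select which store view to bootstrap.
$mageRunCode = isset($_SERVER['MAGE_RUN_CODE']) ? $_SERVER['MAGE_RUN_CODE'] : '';
$mageRunType = isset($_SERVER['MAGE_RUN_TYPE']) ? $_SERVER['MAGE_RUN_TYPE'] : 'store';

// Mage::run($mageRunCode, $mageRunType); // the real bootstrap call
echo "run code: '{$mageRunCode}', run type: '{$mageRunType}'", PHP_EOL;
```

<p>Setting these in the server block rather than in PHP means one codebase can serve multiple store fronts just by varying the vhost config.</p>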
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line:
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing it under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go:</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file path for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php5/mysql.ini
</code></pre>
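<p>If you want to confirm the socket path from PHP itself, one option is a quick PDO connection attempt with the socket spelled out in the DSN (unix_socket in the DSN overrides pdo_mysql.default_socket). The credentials and database name below are placeholders; adjust to taste:</p>

```php
<?php
// Placeholder credentials/database; the point is only to exercise the socket path.
$dsn = 'mysql:unix_socket=/opt/local/var/run/mysql55/mysqld.sock;dbname=test';
try {
    new PDO($dsn, 'root', '');
    echo 'connected', PHP_EOL;
} catch (PDOException $e) {
    // A 'No such file or directory' message here means the socket path is still wrong.
    echo 'connection failed: ', $e->getMessage(), PHP_EOL;
}
```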
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);  // prints //www.google.com/a/b/c/d.img
</code></pre>
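<p>As an aside, for this particular job, grabbing the trailing segment of a URL without the leading slash, there are a couple of simpler spellings (assuming the URL carries no query string):</p>

```php
<?php
$url = 'http://www.google.com/a/b/c/d.img';

// Trim the leading slash that strrchr keeps
echo substr(strrchr($url, '/'), 1), PHP_EOL; // d.img

// basename() also does the job for slash-delimited paths
echo basename($url), PHP_EOL; // d.img
```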
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
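<p>To make the idea concrete, here is a minimal sketch of what such a wrapper could look like. The class and method names (Str, afterLast) are invented for illustration; this is not an existing library:</p>

```php
<?php
// Hypothetical string wrapper: readable method names instead of
// remembering the difference between strstr and strrchr.
class Str
{
    private $value;

    public function __construct($value)
    {
        $this->value = (string) $value;
    }

    // Everything after the last occurrence of $needle (empty if absent).
    public function afterLast($needle)
    {
        $pos = strrpos($this->value, $needle);
        return new self($pos === false ? '' : substr($this->value, $pos + strlen($needle)));
    }

    public function __toString()
    {
        return $this->value;
    }
}

$url = new Str('http://www.google.com/a/b/c/d.img');
echo $url->afterLast('/'), PHP_EOL; // d.img
```

<p>A call like $url-&gt;afterLast(&#39;/&#39;) says what it does in a way strrchr($url, &#39;/&#39;) never will.</p>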
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and, at times, support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC in particular, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
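<p>The failure mode is easy to reproduce in plain PHP (this is an illustration only, not Magento code):</p>

```php
<?php
// An empty string is not numeric, so it can never satisfy the xs:decimal
// type that the unit-price element demands; a normalised 0.00 can.
$unitPrice = '';                      // what a zero-priced item can yield
var_dump(is_numeric($unitPrice));     // bool(false)

$unitPrice = ((float) $unitPrice > 0) ? $unitPrice : 0.00;
var_dump(is_numeric($unitPrice));     // bool(true)
```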
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. In November, though, I did resolve to read more technical books, and to focus particularly on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. They formalised most of the vocabulary of OO software development while building Smalltalk, yet it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column utility columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a>.</p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available on GitHub, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have MacPorts in /opt/local, the default, and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine; however, to debug during a PHPUnit test you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward connections made to its own port 9000 back to port 9000 on the machine you connected from. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months. PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, plus the ubiquity of <a href="https://github.com">GitHub</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
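<p>Composer sidesteps this in two ways: dependencies are declared per project (and installed into that project&#39;s vendor/ directory, not system-wide like PEAR), and version constraints can carry per-package stability flags. As a rough sketch — the package names below are made up for illustration — a composer.json can happily mix stable and beta requirements:</p>

```json
{
    "require": {
        "vendor-a/package-x": "~1.2",
        "vendor-b/package-z": "~2.0",
        "shared/package-y": "1.0.*@beta"
    }
}
```

<p>Because each project resolves and vendors its own dependency graph, two applications on the same machine can depend on entirely different versions of package y — the scenario that stumps PEAR&#39;s centralised install.</p>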
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best (and dubious quality at worst), and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of GitHub has been key. Composer can sit over the top of code distribution services provided by GitHub, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting XML, though, is raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care what differs in the contents of two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: because the grant tables are skipped, any user could otherwise connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there compared to pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails looks at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
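<p>To make the lookup order concrete, here is a tiny plain-Ruby sketch — not the Rails implementation, just a model of the documented behaviour. Handlers registered later are consulted first, so a blanket Exception handler registered last shadows every more specific handler above it:</p>

```ruby
# A toy model of rescue_from handler lookup: handlers are registered
# top-to-bottom, but searched bottom-to-top (most recent first), and
# the first klass for which exception.is_a?(klass) holds wins.
class Handlers
  def initialize
    @handlers = [] # [klass, block] pairs, in registration order
  end

  def rescue_from(klass, &block)
    @handlers << [klass, block]
  end

  def handle(exception)
    # Search from the bottom of the list upwards
    _klass, block = @handlers.reverse.find { |k, _| exception.is_a?(k) }
    block ? block.call(exception) : raise(exception)
  end
end

h = Handlers.new
h.rescue_from(Exception)     { |e| "generic" }   # registered first (top)
h.rescue_from(ArgumentError) { |e| "specific" }  # registered last (bottom)

h.handle(ArgumentError.new)  # => "specific" — the bottom handler matches first
```

<p>Swap the registration order so that <strong>Exception</strong> sits at the bottom and it will swallow everything, which is exactly the trap the API docs are warning about.</p>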
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
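<p>For what it&#39;s worth, Magento 1 ships a shell/log.php script that can do the cleaning, so the crontab route can be a single entry. A sketch only — the path is an example and the exact options vary by Magento version, so check the script&#39;s own usage output first:</p>

```
# Example crontab entry: clean Magento log tables nightly at 3am.
# Path and options are illustrative — verify with:
#   php -f /var/www/magento/shell/log.php
0 3 * * * php -f /var/www/magento/shell/log.php -- clean --days 7 >/dev/null 2>&1
```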
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
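<p>Since the embedded gist won&#39;t render in every feed reader, the override looks roughly like this. This is a sketch covering only a couple of the Mage_Log observers — check app/code/core/Mage/Log/etc/config.xml in your version for the full list of event and observer names to disable:</p>

```xml
<config>
    <frontend>
        <events>
            <!-- Mage_Log registers its visitor-logging observers under
                 the name "log"; type "disabled" makes the dispatcher
                 skip them entirely -->
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
            <customer_login>
                <observers><log><type>disabled</type></log></observers>
            </customer_login>
        </events>
    </frontend>
</config>
```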
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now, if you look at line 11, you can see that if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS safe: they fixed one occurrence but (programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages: they promise so much, and when, on that special day, the moon is aligned with Mars, it all just works and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>
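<p>The gist above does the actual work; in outline, building the gem from a source checkout looks something like this. The repo URL, tag and build steps below are assumptions from the era of this post rather than a copy of the gist, so adjust them to whatever release you actually need:</p>

```shell
# Sketch: build and install the Chef gem from a git checkout
# instead of pulling 10.12.0 from Rubygems.
# Repo URL and tag are assumptions - verify before use.
git clone git://github.com/opscode/chef.git
cd chef
git checkout 10.14.0
gem build chef.gemspec
sudo gem install chef-10.14.0.gem --no-ri --no-rdoc
```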

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
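<p>The gist shows the real thing; as a sketch, the adminhtml.xml fragment takes roughly this shape (the menu node name and module name here are placeholders, not real Magento identifiers):</p>

```xml
<!-- app/code/local/My/Module/etc/adminhtml.xml (sketch) -->
<config>
    <menu>
        <!-- re-declare the menu item you want to hide... -->
        <some_menu_item>
            <!-- ...and make it depend on a module that does not exist -->
            <depends>
                <module>My_NonexistentModule</module>
            </depends>
        </some_menu_item>
    </menu>
</config>
```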
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
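<p>Putting the two steps together, here is a self-contained example; the dates and file names are made up for illustration:</p>

```shell
# Create two boundary files marking the start and end of the range
touch -t 202001010000 start_date_file
touch -t 202001310000 end_date_file

# Two sample files, one inside the range and one outside it
touch -t 202001150000 inside.txt
touch -t 202003010000 outside.txt

# List only files modified after start_date_file but not after end_date_file
find . -type f -newer start_date_file ! -newer end_date_file
```

Note that end_date_file itself can appear in the listing (it is newer than the start file but not newer than itself), so keep the boundary files outside the directory you intend to clean, or exclude them by name.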
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped &#39;\;&#39; terminates the command sequence (much as &#39;;&#39; does in regular bash).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning. First: if you have an old Magsafe (1) powerpack, as I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher rated power supply can support a lower rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh, if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
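<p>In short, the edit looks like this. The snippet below demonstrates against a scratch copy of the file; on a real system set SHELLS_FILE=/etc/shells, perform the append via sudo, and the path assumes a default Macports prefix of /opt/local:</p>

```shell
# Demonstrate the /etc/shells trick against a scratch copy; on a real
# system SHELLS_FILE would be /etc/shells and the append needs sudo.
SHELLS_FILE=$(mktemp)            # stand-in for /etc/shells
NEW_SHELL=/opt/local/bin/bash    # the Macports bash

# Append the new shell only if it is not already listed
grep -qx "$NEW_SHELL" "$SHELLS_FILE" || echo "$NEW_SHELL" >> "$SHELLS_FILE"
grep -qx "$NEW_SHELL" "$SHELLS_FILE" && echo "shell permitted"
# → shell permitted

# On the real file, chsh will now accept it:
#   chsh -s /opt/local/bin/bash
```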
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
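<p>You can see the post-mysql part of the pipeline at work by substituting printf for the database; the colour values below are just example rows:</p>

```shell
# Quote each row, join with commas, wrap in [ ]; this is the
# awk | paste | sed tail of the pipeline, fed with fake mysql output.
# (The trailing '-' tells paste to read standard input.)
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# → ['red','green','blue'];
```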
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each iteration. It does mean a new validator has to be constructed on each iteration, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste in the following (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can (in theory) run on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a c compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot, and by break, in the absolute best case, I mean merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
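<p>That copy is just:</p>

```shell
$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
```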
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where the entity id comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just two rows (one for each unique status code). In order for this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end, you can find, to the horror of your project sponsor (and the spouse who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The idea that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practise what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Michael Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion around login shells, such as when you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases the rule still applies: a login shell sources .bash_profile, and .bashrc is only sourced if .bash_profile does so itself.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
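<p>That delegation pattern amounts to a couple of lines in .bash_profile. This is a common convention rather than anything bash mandates, so treat it as a sketch:</p>

```shell
# ~/.bash_profile (sketch): delegate to ~/.bashrc so login and
# non-login interactive shells end up with the same configuration.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```

<p>With this in place, everything lives in .bashrc and login shells simply pick it up too.</p>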
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
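<p>If you want to preview what the pipeline does before feeding anything back into dpkg, you can run the transformation stage on its own. The two sample lines below are made-up dpkg output, not real system state:</p>

```shell
# Dry run of the selection rewrite: keep only packages in the
# 'deinstall' state (removed, config files kept) and flip them to
# 'purge'. The printf stands in for 'dpkg --get-selections' here.
printf 'vim\t\t\tdeinstall\ncurl\t\t\tinstall\n' \
  | grep deinstall \
  | sed 's/deinstall/purge/'
```

<p>Only the vim line survives the grep, and its state column now reads purge; piping that into <em>dpkg --set-selections</em> followed by <em>dpkg -Pa</em> is what performs the actual purge.</p>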
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; for running VMs use <em>controlvm</em>.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
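<p>For completeness, the equivalent on a running VM uses controlvm with the same rule syntax. The VM name, rule name and ports below are just the example values from above:</p>

```shell
# Add, then remove, the same SSH port forward on a *running* VM.
# Requires a running VirtualBox VM named "VM name"; adjust to taste.
VBoxManage controlvm "VM name" natpf1 "guestssh,tcp,,2222,,22"
VBoxManage controlvm "VM name" natpf1 delete "guestssh"
```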
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libXss, not being there. But it <em>is</em> there; specifically though, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I&#39;m missing both a compatible libXss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can occasionally be dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entity ID, and then returns whichever entry matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
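<p>Here is a sketch of the patched action (note the guard becomes a getId() check, because load() always returns an object even when no matching row exists):</p>
<pre><code>public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    // a single SELECT by primary key, instead of hydrating every address
    $address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
    if ($address-&gt;getId()) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>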
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
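<p>The offending line in that example calls vcl_hash as if it were a function. In Varnish 3 you add values to the cache key with hash_data() inside the vcl_hash subroutine, so a fix would look something like this (a sketch, assuming the intent was to include the cookie in the hash):</p>
<pre><code>sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    }
    # vary the cache key on the cookie as well
    if (req.http.Cookie) {
        hash_data(req.http.Cookie);
    }
    return (hash);
}
</code></pre>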
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is, I think, <a href="http://www.webupd8.org">webupd8</a>. There you can find a lot of tips to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (see <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (see <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
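<p>The same batch resize can be written with GNU find, which also recurses into subdirectories (a sketch; note GNU find substitutes {} anywhere in the argument, so the copies end up with a .jpg-resized.jpg suffix):</p>
<pre><code>$ find . -name &#39;*.jpg&#39; -exec convert -resize &#39;1280x720&#39; {} {}-resized.jpg \;
</code></pre>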
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert keeps the 16:10 aspect ratio and fits the image inside the requested box, resizing it to something like 1152x720. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, you need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying: push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
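<p>Note also that newer versions of Git (1.7.0 and later) accept a more self-documenting spelling of the same delete:</p>
<pre><code>$ git push origin --delete someremotebranch
</code></pre>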
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. But what is different is how sponge waits for end-of-file (EOF) before opening and writing to its output file, i.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it were cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
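<p>On Linux, file&#39;s -i switch prints the MIME type and character set, with output along these lines (exact wording varies between versions):</p>
<pre><code>$ file -i myfile.txt
&gt; myfile.txt: text/plain; charset=iso-8859-1
</code></pre>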
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with that copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a> (effectively a temporary file descriptor), then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
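<p>For comparison, the same job can be written as an ordinary pipeline; process substitution simply gives you more flexibility, for instance when a command needs to read from more than one stream:</p>
<pre><code>$ wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxv
</code></pre>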
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save changes. Then go through the list again, disabling the previously greyed-out modules (the ones that still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the pm-disable Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
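<p>If you are scripting this, Drush&#39;s -y option answers yes to the confirmation prompt for you:</p>
<pre><code>$ drush -y pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>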
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
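<p>As an aside, since Git 1.7.0 the push and --set-upstream steps can be combined: the -u flag tells push to set up tracking in the same command.</p>
<pre><code>$ git push -u origin my-new-feature
</code></pre>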
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
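<p>To confirm the change, the id command lists the user&#39;s groups (the numeric IDs here are illustrative, and the user must log in again for the new group to take effect):</p>
<pre><code>$ id aaron
&gt; uid=1000(aaron) gid=1000(aaron) groups=1000(aaron),10(wheel)
</code></pre>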
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command-line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
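<p>As a shorthand, git stash pop applies the most recent stash and drops it from the stash list in one step:</p>
<pre><code>$ git stash
$ git checkout develop
$ git stash pop
</code></pre>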
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
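<p>Recent versions of Git also let you drop the explicit local branch name; with -t (--track) the local branch takes its name from the remote branch:</p>
<pre><code>$ git checkout -t origin/develop
</code></pre>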
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
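<p>Alternatively, you can have git write the same file for you by running git config as the jenkins user (the -H flag makes sudo use the jenkins account&#39;s home directory):</p>
<pre><code>$ sudo -H -u jenkins git config --global user.name &quot;Jenkins&quot;
$ sudo -H -u jenkins git config --global user.email &quot;jenkins@localhost&quot;
</code></pre>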
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed, or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application, but you&#39;re free to use whatever codebase you like. For the purposes of this tutorial I&#39;m using this project as a starting point: <code>git@github.com:ajbonner/Bookings.git</code>. </p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that at first glance looks daunting. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository; for example, mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>Jenkins needs to run a first build before the project workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete, you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire a http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><code>reload</code>: reload server configuration</li>
<li><code>restart</code>: restart the server</li>
<li><code>exit</code>: close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
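<p>Those endpoints are easy to wrap in a small shell function. A sketch (the function and its DRY_RUN guard are illustrative additions, not part of Jenkins; the guard lets you see the request shape without a live server):</p>

```shell
# Wrap the three admin endpoints. Unless DRY_RUN=0, print the request
# instead of sending it, so no running Jenkins is needed.
jenkins_admin() {
  server=$1 cmd=$2
  case $cmd in
    reload|restart|exit) ;;
    *) echo "unknown command: $cmd" >&2; return 1 ;;
  esac
  if [ "${DRY_RUN:-1}" = 1 ]; then
    echo "curl http://$server/$cmd"
  else
    curl "http://$server/$cmd"
  fi
}

jenkins_admin localhost:8080 reload
```

<p>Set DRY_RUN=0 once you are pointing it at a real server.</p>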
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepared package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins package attempts to start up a Java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
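<p>If you provision machines with scripts, the same port change can be made non-interactively. A sketch using GNU sed, run here against a scratch copy rather than the real /etc/default/jenkins:</p>

```shell
# Demonstrate the edit on a scratch copy of the defaults file; on a real
# Ubuntu box the target would be /etc/default/jenkins (edited with sudo).
defaults_file=$(mktemp)
printf '%s\n' '# port for HTTP connector (default 8080; disable with -1)' \
              'HTTP_PORT=8080' > "$defaults_file"
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=9090/' "$defaults_file"   # GNU sed in-place syntax
grep '^HTTP_PORT=' "$defaults_file"
```

<p>Follow it with <code>sudo service jenkins restart</code> as above.</p>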
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have burned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace, though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into GitHub (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least work.</p>
<p>To avoid having to do this for future repositories, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
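<p>The whole flow can be exercised offline with a local bare repository standing in for GitHub. A sketch using throwaway temp paths and the modern <code>--set-upstream-to</code> spelling of the same command:</p>

```shell
set -e
work=$(mktemp -d)
git init --bare -q "$work/remote.git"        # stand-in for the GitHub repo
git init -q "$work/code"
cd "$work/code"
git symbolic-ref HEAD refs/heads/master      # make sure the branch is named master
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m 'initial commit'
git remote add origin "$work/remote.git"
git push -q origin master
git branch --set-upstream-to=origin/master master
git pull -q                                  # no more missing-upstream complaint
```

<p>From there, a plain <code>git pull</code> works with no arguments.</p>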
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it has performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
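<p>The connection details in local.xml.phpunit follow the familiar Magento local.xml layout. An illustrative fragment only (node names and values here are assumptions based on the standard layout; check the module&#39;s bundled manual for the exact settings your version expects):</p>

```xml
<!-- Illustrative sketch of the test connection block in app/etc/local.xml.phpunit -->
<config>
    <global>
        <resources>
            <default_setup>
                <connection>
                    <host>localhost</host>
                    <username>magento_test</username>
                    <password>secret</password>
                    <dbname>magento_unit_tests</dbname>
                </connection>
            </default_setup>
        </resources>
    </global>
</config>
```

<p>Point dbname at a scratch database rather than your live store&#39;s.</p>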
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
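<p>The core of the technique can be sketched as a small function (names and flags here are illustrative, not the gist verbatim; the mysql/mysqldump binaries are taken from variables so the function can be exercised without a live server):</p>

```shell
# Ask mysql for the table names matching a LIKE pattern, then hand the
# whole list to a single mysqldump invocation.
MYSQL=${MYSQL:-mysql}
MYSQLDUMP=${MYSQLDUMP:-mysqldump}

mysqldump_bypattern() {
  user=$1 db=$2 pattern=$3
  tables=$("$MYSQL" -u"$user" -N -e "SHOW TABLES LIKE '$pattern'" "$db")
  [ -n "$tables" ] && "$MYSQLDUMP" -u"$user" "$db" $tables   # word-splitting wanted here
}

# Usage: mysqldump_bypattern user mydb 'mytables_%' > mytables.sql
```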
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until these updates are provided there, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using the Strings from the keys array instead, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
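<p>A quick check in irb makes the distinction concrete (a minimal sketch):</p>

```ruby
# A Symbol never compares equal to the String made of the same characters;
# converting one to the other is what bridges the gap.
puts :key == "key"         # false
puts :key.to_s == "key"    # true
puts "key".to_sym == :key  # true
```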
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install New Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This can take some time to load; give it five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select PDT Development Tools All in One SDK (leave the others unselected) and click Next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to ignore them for now or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of one is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed., puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. Two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this can save a tremendous amount of memory.</p>
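<p>This is easy to verify with object_id, a minimal sketch (assuming the frozen-string-literal magic comment is not in effect, so each String literal builds a new object):</p>

```ruby
a = "name"
b = "name"
# Equal in value, but two distinct String objects in memory.
puts a == b                              # true
puts a.object_id == b.object_id          # false
# Every use of :name refers to one and the same Symbol object.
puts :name.object_id == :name.object_id  # true
```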
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code places an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character for a single-byte encoding or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: compressing inline slows the dump down, and while it runs you’re (by default, with MyISAM) locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but it should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur during the backup process; that risk must be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports, as MySQL will only build the indexes at the end of the import. With keys enabled, the index is updated after each row is inserted; given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
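<p>Pulling those options together, a backup helper might look like this. This is a hypothetical sketch (the function name and defaults are my own); it assumes gzip is on the PATH and that credentials are supplied via ~/.my.cnf or extra flags:</p>

```shell
# Dump database $1 with the options discussed above and gzip the result.
# Output goes to $2, defaulting to <db>.sql.gz in the current directory.
backup_db() {
  local db="$1" out="${2:-$1.sql.gz}"
  mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit \
            "$db" | gzip -c > "$out"
}
```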
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
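<p>The greedy-versus-lazy difference is easy to see in any Perl-compatible engine; here is a quick Ruby sketch using the example string above:</p>

```ruby
s = '"The quick brown fox", "The quicker brown fox".'

# Greedy: .* runs to the last '"', nuking both quoted phrases.
puts s.sub(/"The quick brown.*"/, '')   # => .

# Lazy: .*? stops at the first closing '"' (like .\{-} in Vim).
puts s.sub(/"The quick brown.*?"/, '')  # => , "The quicker brown fox".
```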
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>. This is the familiar model of Subversion and <code>svn commit</code>.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring with:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&amp;#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small: some basic webpage views, maybe a few forms and, likely, some sort of search functionality. This is pretty basic, and if things need to change you can normally change them in place, directly on the web-server and without too much grief.</p>
<p>At some point though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to set up your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed into the program. </p>&#13;
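<p>If you do need the program's name in Ruby, it lives in the global $0 (alias $PROGRAM_NAME) rather than in ARGV. A minimal sketch (the file name here is hypothetical):</p>

```ruby
#!/usr/bin/env ruby
# file test2.rb
# ARGV holds only the arguments; the script's own name is in $0.
puts "program: #{$0}"
puts "args:    #{ARGV.inspect}"
```

<p>Running <code>ruby test2.rb helloworld</code> prints the script name on the first line and <code>["helloworld"]</code> on the second.</p>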
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little like clicking 'Start' to shut down on Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the MySQL bundled with OSX rather than the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation: there is no in-situ update feature, so you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index just to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
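<p>Putting delete and re-add together, a termDocs()-based 'update' might look like the following. This is a sketch of my own against the ZF 1.x API rather than anything from the manual; the 'path' and 'title' field names (and $newTitle) are illustrative:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
foreach ($index-&gt;termDocs($term) as $id) {&#13;
    $index-&gt;delete($id); // remove the stale copy&#13;
}&#13;
&#13;
$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('path', '/somepath/somewhere'));&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $newTitle));&#13;
$index-&gt;addDocument($doc); // add the fresh copy&#13;
$index-&gt;commit();</pre>&#13;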
<p>Performance-wise this is much more efficient. However, unless you're careful at the indexing stage, you may run into trouble running termDocs() on a string value such as a URL or path (as opposed to an integer ID). This comes down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. The online documentation makes scant mention of what values this $filter parameter can take. Luckily, in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionProperty::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
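<p>One more tidbit: the $filter value is a bitmask, so the constants can be combined with a bitwise OR, in which case methods matching any of the given attributes are returned. A quick example of my own:</p>&#13;
<pre>$r = new ReflectionClass("MyClass");&#13;
// Print the names of all methods that are public or static&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_STATIC) as $m) {&#13;
    echo $m-&gt;getName(), "\n";&#13;
}&#13;
</pre>&#13;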
<p>As with many platforms, it seems that in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I'm at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf</a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, though, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. It can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you also have a language pack to install, convert it to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. A CV's purpose is two-fold: a) to get you past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized: Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
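<p>For contrast, in PHP a constant really is constant; an attempt to redefine one is simply ignored (a quick sketch of my own):</p>&#13;
<pre>&lt;?php&#13;
define('LIMIT', 10);&#13;
// Raises a notice ('Constant LIMIT already defined') and changes nothing&#13;
define('LIMIT', 20);&#13;
echo LIMIT; // still 10&#13;
</pre>&#13;
<p>Ruby offers no such guarantee.</p>&#13;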
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute, there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And of course a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience of and understanding of how these low level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple tcpclient in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array of 20 chars (enough for a 19-character string plus the terminating NUL), and str itself points to the address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
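<p>As a small illustration (a sketch, not a benchmark), building a string append by append forces those hidden reallocations, whereas collecting the pieces and joining once lets PHP size the final buffer in one go:</p>

```php
<?php
// Two ways to build the same string. The loop may reallocate-and-copy the
// growing buffer on each append; implode() allocates the result once.
$pieces = range(1, 1000);

$byConcat = '';
foreach ($pieces as $p) {
    $byConcat .= $p . ',';          // hidden reallocation can happen here
}

$byImplode = implode(',', $pieces) . ',';
var_dump($byConcat === $byImplode); // bool(true)
```

<p>Both produce the same string; the difference is purely in how much shuffling of memory the engine does on your behalf.</p>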
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way toward explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, some methods in Zend_Pdf expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
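<p>For example (a small sketch; the CSV field names are made up), fputcsv normally wants a file handle, but a memory stream works just as well:</p>

```php
<?php
// Build a CSV entirely in memory, then read it back as a string,
// without ever touching the file system.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, ['sku', 'qty']);       // header row
fputcsv($fh, ['ABC-1', 3]);         // hypothetical data row

rewind($fh);                        // seek back to the start before reading
$csv = stream_get_contents($fh);    // the whole buffer as a string
fclose($fh);

echo $csv;
```

<p>The handle behaves exactly like one returned by fopen() on a real file, so any stream-consuming API will accept it.</p>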
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL (including the encoded POST data), the other containing the response from your server.</p>&#13;
<p>Now assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
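<p>The translation described above can be sketched like this (a hypothetical illustration of the lookup, not Magento's actual autoloader code; <code>modelAliasToPath</code> is a made-up helper):</p>

```php
<?php
// Hypothetical sketch of how a model alias maps to a file path: the part
// after the slash gets each underscore-separated word ucfirst'd, then the
// autoloader turns underscores into directory separators.
function modelAliasToPath(string $alias): string
{
    $class = implode('_', array_map('ucfirst', explode('_', $alias)));
    return str_replace('_', '/', $class) . '.php';
}

echo modelAliasToPath('a_long_name_for_a_model'); // A/Long/Name/For/A/Model.php
echo modelAliasToPath('alongnameforamodel');      // Alongnameforamodel.php
```

<p>Neither result lines up with a file named ALongNameForAModel.php, which is why the camelcased class name can never be found.</p>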
<p>On a case-insensitive file system (Windows, or the default HFS+ on a Mac) this will appear to work; on a case-sensitive file system, as is typical on Linux and other Unix systems, it will not.</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days in the past (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
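<p>A minimal way to sanity-check the interval arithmetic outside MySQL (a sketch, assuming GNU date is available):</p>

```shell
# Compute the same cutoff DATE_SUB(CURDATE(), INTERVAL 30 DAY) would
# produce for 2010-05-20, using GNU date's relative-date parsing
date -d '2010-05-20 -30 days' +%F   # 2010-04-20
```

<p>Any row whose date_column falls before that cutoff is one the query above selects.</p>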
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by editing app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml files (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the template will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements such as &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages under the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages under the url customer/account/edit/). Putting your remove declaration in the right element here gives you fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore, within the &lt;customer_account&gt; element we add the code:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The &lt;reference name="root"&gt; element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'</p>&#13;
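<p>A quick way to see what is currently in effect and what is installed (a sketch; exact locale names vary between systems):</p>

```shell
# Show the locale settings currently in effect for this shell
locale
# List installed locales and filter for the UTF-8 capable ones
locale -a | grep -i 'utf'
```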
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be mangled and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
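<p>The same normalisation can also be done from the shell with the iconv command line tool, which is handy for fixing up files or database dumps in bulk (a sketch, assuming the iconv CLI is installed):</p>

```shell
# 0xE9 is é in ISO-8859-1; convert it to its two-byte UTF-8 form
printf 'caf\xe9\n' > latin1.txt
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt
cat utf8.txt   # café
```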
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>On BSD sed (the version that ships with Mac OSX) it doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory. BSD sed requires an explicit (possibly empty) backup suffix after -i, whereas GNU sed does not.</p>&#13;
<p>The trick is to pass an empty suffix: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
&#13;
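<p>A portable middle ground is to give -i a real backup suffix, which both GNU and BSD sed accept (a sketch):</p>

```shell
# -i.bak edits in place and keeps the original as helloworld.txt.bak;
# this form works on both GNU sed (Linux) and BSD sed (Mac OSX)
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt   # goodbye world
```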
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br />http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) produces the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and seq produces a sequence of numbers from x to y. If you call seq 0 4, you will get 0, 1, 2, 3 and 4, one per line.</p>&#13;
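<p>For the common case where you only need the values rather than the indices, iterating the array directly is shorter and copes with elements containing spaces:</p>

```shell
# Quoting "${FILES[@]}" expands to exactly one word per array element
FILES=( a/path/to/a/file1 a/path/to/another/file2 )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```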
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here&#39;s a hot tip, don&#39;t use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You just go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains, so the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
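<p>To confirm the change took, list the user&#39;s groups with <em>id</em>, which consults Directory Services on OSX just as it reads /etc/group on Linux:</p>
<pre><code>$ id -Gn &lt;usernamehere&gt;
</code></pre>
<p>The group you appended should now appear in the list.</p>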
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname</p>
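<p>The alias works because <em>git symbolic-ref --short HEAD</em> prints the name of the branch you currently have checked out, which you can verify in any repository:</p>
<pre><code>$ git checkout -b zendesk
Switched to a new branch &#39;zendesk&#39;
$ git symbolic-ref --short HEAD
zendesk
</code></pre>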
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell, which means variables cannot be passed along the pipeline: each subprocess gets its own copy of the environment.</p>
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
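<p>You can see the subshell effect for yourself in bash: a counter incremented inside a piped-to loop vanishes when the pipeline ends, while feeding the same loop via process substitution keeps the variable in the current shell:</p>
<pre><code>count=0
printf &#39;a\nb\nc\n&#39; | while read -r line; do count=$((count+1)); done
echo $count   # prints 0: the while loop ran in a subshell

count=0
while read -r line; do count=$((count+1)); done &lt; &lt;(printf &#39;a\nb\nc\n&#39;)
echo $count   # prints 3: no pipeline, so no subshell
</code></pre>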
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
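<p>With merge.ff set to only, a merge that cannot be fast-forwarded is refused outright instead of quietly producing a merge commit. On a branch that has diverged, the attempt looks like this (wording from a recent git; older versions may phrase it slightly differently):</p>
<pre><code>$ git merge feature
fatal: Not possible to fast-forward, aborting.
</code></pre>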
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so.</p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: the Statist style and the Mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
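<p>A quick way to see NULLIF and MIN interact is with a throwaway database. The behaviour below is shown with SQLite, which implements both functions the same way MySQL does; the table and values are made up for illustration:</p>
<pre><code>$ sqlite3 :memory: &#39;CREATE TABLE products (group_id INT, price REAL);
INSERT INTO products VALUES (1, 0), (1, 9.99), (1, 4.5), (2, 0);
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price FROM products GROUP BY group_id;&#39;
1|4.5
2|
</code></pre>
<p>Note the second group: when every price in a group is zero, MIN over the NULLIFed column returns NULL rather than 0.00, so decide how you want to display that case.</p>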
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one place OSX falls severely behind is its BSD-inspired Unix implementation. If you were born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem: PHP and php appear as the same thing to HFS.</p>
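<p>It&#39;s easy to check how your filesystem treats case. The two writes below land in a single file on a default HFS volume, but produce two files on a case-sensitive filesystem:</p>
<pre><code>$ cd $(mktemp -d)
$ echo one &gt; PHP
$ echo two &gt; php
$ ls | wc -l
</code></pre>
<p>On default HFS that prints 1 (and the file contains &#39;two&#39;, because the case-insensitive lookup matched the existing name); on ext4 or case-sensitive HFS it prints 2.</p>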
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache or Nginx? It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by pointing PHP at the socket file for the MySQL version you&#39;re using. I use mysql55, so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
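<p>To confirm PHP actually picked the setting up, you can ask the CLI directly (this assumes the MacPorts php binary is the one on your path):</p>
<pre><code>$ php -r &#39;echo ini_get(&quot;pdo_mysql.default_socket&quot;), PHP_EOL;&#39;
</code></pre>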
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track them down.</p>
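<p>If you mount the same host regularly, the IdentityFile option can also live in ~/.ssh/config, which shortens the sshfs command considerably. A sketch, reusing the same hypothetical host and key as above:</p>
<pre><code>Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
</code></pre>
<p>With that in place, sshfs awshost:/var/www/ ~/Sites/awshost picks up the key automatically.</p>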
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of these two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array and Float. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
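<p>For what it&#39;s worth, the operation itself is common enough that most environments have a spelling for it. In plain POSIX shell, for instance, parameter expansion does the same first-versus-last occurrence trim (shown purely for comparison):</p>
<pre><code>url=&#39;http://www.google.com/a/b/c/d.img&#39;
echo &quot;${url##*/}&quot;   # strips everything up to and including the last /, prints d.img
echo &quot;${url#*/}&quot;    # strips up to and including the first /, prints /www.google.com/a/b/c/d.img
</code></pre>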
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xerox parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. Inside it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
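<p>Under the hood, -s just sends a signal to the master process (QUIT for graceful, TERM/stop for immediate), so the same graceful shutdown can be triggered with plain kill if the nginx binary isn&#39;t handy. The pid file path below is an assumption; check the pid directive in your nginx.conf for the actual location:</p>
<pre><code># SIGQUIT = graceful shutdown, SIGTERM = fast shutdown
$ sudo kill -QUIT $(cat /var/run/nginx.pid)
</code></pre>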
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did though, back in November, resolve to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list setup in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
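<p>If your shell lacks that binding, the POSIX fc builtin does much the same job after the fact: with no arguments it opens the previous command in $EDITOR, then runs the edited result.</p>
<pre><code>$ fc
</code></pre>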
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will then switch you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
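<p>A portable cousin that works in any POSIX shell, zsh included, is cd -, which toggles between the current and previous directory (and prints where it took you):</p>
<pre><code>aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ cd -
/Users/aaron
aaron@tempest ~ $
</code></pre>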
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing the following (:w !cmd pipes the buffer to cmd&#39;s standard input, and % expands to the current file&#39;s path, so tee writes the buffer back to the file with elevated permissions)</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
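<p>If the gem is pulled in through Bundler rather than installed directly, the same build flags can be recorded once with bundle config, so every subsequent bundle install should pick them up automatically:</p>
<pre><code>$ bundle config build.mysql2 --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql
</code></pre>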
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and more secure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to its own port 9000 back to port 9000 on the ssh client. When xdebug goes to connect to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package X requires the stable release of package Y, while package Z requires the beta of Y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves. </p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high-quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
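<p>Process substitution is worth having in your toolbox well beyond this one trick: any command that insists on a file argument can be fed the output of another command. Here is a minimal sketch; diff and printf stand in for xmllint and magerun purely so the example runs anywhere.</p>

```shell
#!/usr/bin/env bash
# <(cmd) runs cmd and exposes its output as a readable file path,
# so tools that refuse piped input can still consume command output.
# diff stands in for xmllint here, and printf for magerun.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "inputs identical"
```

<p>Note this is a bash/zsh feature; plain sh will choke on the <code>&lt;(...)</code> syntax.</p>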
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
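<p>If you want to try --name-only without touching a real project, here is a self-contained sketch in a throwaway repository (the file name and commit message are made up for the demo):</p>

```shell
#!/usr/bin/env bash
set -e
# Build a throwaway repo so --name-only has something to report on.
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'hello\n' > README
git add README
git -c user.name=demo -c user.email=demo@example.com commit -q -m 'add readme'
# An empty --pretty format suppresses the commit header,
# leaving just the list of affected paths.
git show --name-only --pretty=format: HEAD
```
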
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there than on pretty much any other platform, because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as the scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still largely respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails looks at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
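<p>For what it&#39;s worth, the prune can also be folded into the fetch itself with git fetch --prune (or -p). Here is a self-contained sketch of the whole lifecycle, using a throwaway local bare repository standing in for the remote (all paths and branch names are made up for the demo):</p>

```shell
#!/usr/bin/env bash
set -e
tmp=$(mktemp -d)
# A local bare repo stands in for the remote.
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/clone" 2>/dev/null
cd "$tmp/clone"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
git push -q origin HEAD                # publish the default branch
git push -q origin HEAD:stale-branch   # create a second remote branch
git fetch -q                           # origin/stale-branch now tracked
git push -q origin :stale-branch       # delete it on the remote...
git fetch -q --prune                   # ...and prune the stale tracking ref
git branch -r                          # stale-branch no longer listed
```
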
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think: I&#39;ve audited these pages and they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to XSS-proof the code: they fixed one occurrence, but (programmers are human) missed the other, exactly identical, line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). To use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and adds a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
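<p>In case the gist doesn&#39;t load, the shape of the trick is roughly the following. This is a sketch only: the menu node path and module name here are placeholders, not the real entries from the gist.</p>

```xml
&lt;config&gt;
    &lt;menu&gt;
        &lt;!-- placeholder: the path of the menu entry you want to remove --&gt;
        &lt;some_menu_item&gt;
            &lt;!-- depend on a module that doesn&#39;t exist, so the item is never shown --&gt;
            &lt;depends&gt;
                &lt;module&gt;Nonexistent_Module&lt;/module&gt;
            &lt;/depends&gt;
        &lt;/some_menu_item&gt;
    &lt;/menu&gt;
&lt;/config&gt;
```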
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called
655958024588 (-newer) and not newer than a file called 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
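<p>Here&#39;s a self-contained sketch of the whole technique using throwaway files in a temp directory (the file names and dates below are made up for the demonstration):</p>

```shell
# Create two boundary files and three test files, then list only
# the files whose mtime falls between the two boundaries.
# All names and dates here are illustrative.
demo_dir="$(mktemp -d)"
start="$(mktemp)"
end="$(mktemp)"

touch -t 201208010000 "$start"   # range start: 1 Aug 2012
touch -t 201208310000 "$end"     # range end:  31 Aug 2012

cd "$demo_dir"
touch -t 201207150000 too_old    # before the range
touch -t 201208150000 in_range   # inside the range
touch -t 201209150000 too_new    # after the range

# Newer than the start boundary, and not newer than the end boundary.
find . -type f -newer "$start" ! -newer "$end"
# prints ./in_range
```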
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped &#39;;&#39; terminates the command sequence (much like it does in regular bash). Since -type f only matches plain files, rm -f would suffice; the -r is only needed for directories.</p>
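<p>And a quick end-to-end sketch of the delete, again against throwaway files, showing that only the files inside the date window are removed:</p>

```shell
# Build a temp directory with one file inside the date window and
# two outside it, then delete only the in-window file.
# All names and dates here are illustrative.
demo_dir="$(mktemp -d)"
start="$(mktemp)"
end="$(mktemp)"
touch -t 201208010000 "$start"
touch -t 201208310000 "$end"

cd "$demo_dir"
touch -t 201207150000 keep_old
touch -t 201208150000 doomed
touch -t 201209150000 keep_new

find . -type f -newer "$start" ! -newer "$end" -exec rm -f {} \;

ls    # keep_old and keep_new survive; doomed is gone
```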
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy-to-install gem (which comes with horribly out-of-date basebox templates) or installing the latest version from source, which uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equally rated wattage, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately, it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
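<p>The edit itself is just an append, and it pays to make it idempotent. Here&#39;s a sketch against a temporary stand-in file so it can be tried without root; on the real system the target is /etc/shells and the append needs sudo (e.g. echo /opt/local/bin/bash | sudo tee -a /etc/shells):</p>

```shell
# Idempotently whitelist the Macports bash in (a stand-in for) /etc/shells.
shells_file="$(mktemp)"             # stand-in for the real /etc/shells
printf '/bin/sh\n/bin/bash\n' > "$shells_file"

new_shell='/opt/local/bin/bash'
# Append only if an exact-match line isn't already present.
grep -qxF "$new_shell" "$shells_file" || echo "$new_shell" >> "$shells_file"
# Running it a second time adds no duplicate.
grep -qxF "$new_shell" "$shells_file" || echo "$new_shell" >> "$shells_file"

grep -cxF "$new_shell" "$shells_file"   # prints 1
```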
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output (--silent). It returns each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because the awk program is itself single-quoted, the quote character is passed in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
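<p>You can check the formatting stages without a database by substituting printf for the mysql call (the sample values here are made up):</p>

```shell
# Stand-in for the mysql output: one column value per line.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' \
  | sed 's/\(.*\)/[\1];/'
# prints ['red','green','blue'];
```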
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not NULL. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
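<p>Why this works is just De Morgan&#39;s law. Writing R for &quot;the date lies in the range&quot; and E for &quot;the field has a value&quot;, the query above encodes:</p>

```latex
% NOT (NOT in-range AND has-value)  is equivalent to  in-range OR NOT has-value
\neg(\neg R \land E) \iff R \lor \neg E
```

<p>That is: either the date sits in the range, or the field has no value at all, which is exactly the EITHER/OR we were after.</p>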
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic; it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While this means it also has to be constructed anew each time, it allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot, and by break I mean, in the absolute best case, merely becoming un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, with which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all their children&#39;s stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Oddly though, the status column came first and the product id column second (unlike the if branch, where they were in the reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that nothing actually hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker there&#39;s a brief recap and the book moves swiftly onto it&#39;s fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept, that software is about communication, is emphasised across the book. Tests are no different. Tests should express the developers intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear, and easy to read. I really like the images and diagrams which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the author&#39;s OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being. </p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seems  too many authors, &#39;consultants&#39; and coaches these days that talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoAEE Books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Richie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s  Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here, when you open up a login shell, such as if you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases the rule applies, a login shell means .bash_profile is sourced first, then .bashrc.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, it reduces the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on a would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge <package></em>, or, you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Modifyvm must be only be used when VM is powered off, otherwise controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I do not have both compatible libxss and misc qt libraries installed.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39; if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32, and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be Interesting to see the subtle changes in programs based on how they are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying, things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately upgrading my test-suites to be 3.6 compatible, is the extraction of database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause for this, is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me, is I can&#39;t see any reason why they aren&#39;t doing this already? The change in code works, nothing appears to break and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out it&#39;s full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, ImageMagick has been a firm friend ever since.</p>
<p>There are three main tools I find myself falling back on, time and time again: <a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open an X window showing the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, Identify lets you get information about a file: size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash or with GNU Find. For example, to convert a batch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to the requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin&#39;. Now, can you see where this is going with respect to our delete? A push with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch called someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
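<p>Newer Git versions (1.7.0 and later) also accept an explicit <code>--delete</code> flag that does the same thing and is easier to remember. Here is a self-contained sketch in a throwaway repository (the repo and branch names are made up for the demo):</p>

```shell
# 'git push origin --delete develop' is the explicit spelling of
# 'git push origin :develop' (available since Git 1.7.0).
work=$(mktemp -d) && cd "$work"
git init -q --bare upstream.git
git clone -q upstream.git clone && cd clone
git config user.name Demo && git config user.email demo@example.com
git commit -q --allow-empty -m 'Initial commit'
git push -q origin HEAD                # publish the default branch
git push -q origin HEAD:develop        # create a remote develop branch
git push -q origin --delete develop    # and delete it again
```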
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file. That is, it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it to the original file.</p>
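<p>If you don&#39;t have Moreutils to hand, the same in-place effect can be had with a temporary file. A minimal sketch, assuming a made-up example file demo.txt containing cp1252 text:</p>

```shell
# Fallback without sponge: convert into a temp file first, then
# replace the original only if iconv succeeded.
printf 'caf\xe9\n' > demo.txt   # 'café' with é as the cp1252 byte 0xE9
for FILE in demo.txt; do
    tmp=$(mktemp) &&
        iconv -f cp1252 -t utf-8 "$FILE" > "$tmp" &&
        mv "$tmp" "$FILE"
done
```

<p>The &amp;&amp; chain matters here: if iconv fails partway, the original file is left untouched.</p>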
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and then we direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
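<p>To see the plumbing without touching the network, the same pattern can be exercised locally. In this sketch the tarball is generated on the spot and cat stands in for the wget call:</p>

```shell
# Process substitution demo with a locally built tarball; 'cat'
# stands in for 'wget -q -O -' so no network access is needed.
work=$(mktemp -d) && cd "$work"
mkdir atarfile && echo hello > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile                      # throw away the original copy
tar zx < <(cat atarfile.tar.gz)     # extract straight from the stream
```

<p>Any command that writes a tarball to stdout can sit inside the &#39;&lt;(&#39; operator; wget is just the most common case.</p>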
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save changes. Then you go through the list again, disabling the previously greyed-out modules (which were blocked because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
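<p>One small aside: backticks cannot be nested, so if you ever need to post-process the module list, the POSIX <code>$( )</code> form of command substitution is handier. A sketch using a stub drush shell function (purely hypothetical, standing in for a real Drupal install):</p>

```shell
# Stub 'drush' so the pattern can be demonstrated without Drupal;
# a real install would behave the same way with the real command.
drush() {
  case "$1" in
    pm-list)    printf 'ad\nad_channel\nclick_filter\n' ;;
    pm-disable) shift; echo "disabled: $*" ;;
  esac
}
result=$(drush pm-disable $(drush pm-list --no-core --type=module --pipe))
echo "$result"   # disabled: ad ad_channel click_filter
```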
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
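<p>On Git 1.7.0 and later, the push and the upstream configuration can be collapsed into a single step with <code>-u</code>. A self-contained sketch in a throwaway repository:</p>

```shell
# 'git push -u origin my-new-feature' pushes the branch and sets its
# upstream in one step, replacing the separate --set-upstream call.
work=$(mktemp -d) && cd "$work"
git init -q --bare origin.git
git clone -q origin.git clone && cd clone
git config user.name Demo && git config user.email demo@example.com
git commit -q --allow-empty -m 'Initial commit'
git checkout -q -b my-new-feature
git commit -q --allow-empty -m 'Initial feature commit'
git push -q -u origin my-new-feature
```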
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G <em>append</em> to the existing list of groups. Otherwise the user&#39;s existing supplementary groups will be replaced by the groups supplied.</p>
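<p>To double-check the result (the new group shows up in fresh login sessions), id will list a user&#39;s groups by name. The current user stands in for aaron in this sketch:</p>

```shell
# 'id -nG USER' prints the names of every group USER belongs to,
# which is a handy way to confirm a 'usermod -a -G' took effect.
id -nG "$(whoami)" | tr ' ' '\n' | sort > current-groups.txt
cat current-groups.txt
```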
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
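<p>The whole round-trip can be replayed in a throwaway repository. Note that <code>git stash pop</code> behaves like <code>apply</code> followed by dropping the stash entry, which is usually what you want:</p>

```shell
# Self-contained replay of the stash workflow in a scratch repo.
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.name Demo && git config user.email demo@example.com
echo v1 > notes.txt
git add notes.txt && git commit -qm 'Initial commit'
git branch develop
echo v2 > notes.txt          # oops: edited while still on master
git stash                    # shelve the uncommitted change
git checkout -q develop
git stash pop                # re-apply it on the right branch
git commit -qam 'Apply stashed changes'
```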
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
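<p>On recent Git (1.6.6 and later), a bare <code>git checkout develop</code> will do all of this automatically whenever develop matches exactly one remote branch. A self-contained sketch, including the upstream repo it checks out from:</p>

```shell
# Recent Git guesses the tracking setup: 'git checkout develop' with
# no local develop branch creates one tracking origin/develop.
work=$(mktemp -d) && cd "$work"
git init -q upstream && cd upstream
git config user.name Demo && git config user.email demo@example.com
git commit -q --allow-empty -m 'Initial commit'
git branch develop
cd .. && git clone -q upstream clone && cd clone
git checkout -q develop            # DWIM: tracks origin/develop
```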
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
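<p>Equivalently, you can set the identity by running git config as the jenkins user rather than editing the file by hand. Sketched here against a throwaway HOME so nothing real is modified:</p>

```shell
# git config --global writes to $HOME/.gitconfig, so this produces
# exactly the file shown above (a throwaway HOME keeps the demo safe).
export HOME=$(mktemp -d)
git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"
```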
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework Test Cases; we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to substitute &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload the server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
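<p>If you issue these often, a small wrapper keeps them tidy. This is just a sketch (the function name and argument handling are my own, not part of Jenkins):</p>

```shell
# jenkins_admin SERVER COMMAND
# Issue one of Jenkins' http-style admin commands (reload, restart or
# exit) against SERVER, e.g.: jenkins_admin localhost:8080 reload
jenkins_admin() {
  case "$2" in
    reload|restart|exit) curl "http://$1/$2" ;;
    *) echo "unknown command: $2" >&2; return 1 ;;
  esac
}
```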
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepared package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
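<p>If you would rather script this change, a one-line sed does the job. A minimal sketch (the helper name is mine; the file path is the one used by the Debian/Ubuntu package):</p>

```shell
# set_jenkins_port FILE PORT
# Rewrite the HTTP_PORT line in a Jenkins defaults file. Run as root
# against the real file, e.g.: set_jenkins_port /etc/default/jenkins 9090
set_jenkins_port() {
  sed -i "s/^HTTP_PORT=.*/HTTP_PORT=$2/" "$1"
}
```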
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have burned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
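<p>For reference, the query is built along these lines. This is a reconstruction of the idea rather than the exact SQL, which lives in the gist below:</p>

```shell
# tablesize_report DBNAME
# List the tables in DBNAME ordered by their on-disk size (data plus
# indexes), using the INFORMATION_SCHEMA pseudo-database.
tablesize_report() {
  mysql -e "SELECT table_name,
                   ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
            FROM information_schema.TABLES
            WHERE table_schema = '$1'
            ORDER BY (data_length + index_length) DESC"
}
```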
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the graphical package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best with the least amount of work.</p>
<p>To avoid having to do this for each new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
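<p>The approach boils down to something like this sketch (credential handling is left out; the gist below has the full version):</p>

```shell
# mysqldump_bypattern DBNAME PATTERN
# Dump only the tables in DBNAME whose names match a SQL LIKE pattern,
# e.g.: mysqldump_bypattern mydb 'mytables_%' > dump.sql
mysqldump_bypattern() {
  local db="$1" pattern="$2" tables
  tables=$(mysql -N -B -e "SHOW TABLES LIKE '$pattern'" "$db")
  # word-splitting of $tables into separate arguments is intentional here
  [ -n "$tables" ] && mysqldump "$db" $tables
}
```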
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. The best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Programming With Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax added to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if they consist of the same sequence of characters, are different objects. In Ruby, two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
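<p>A quick sketch makes this concrete: equal strings are distinct objects, while equal Symbols are one and the same (the hash keys and values below are purely illustrative):</p>

```ruby
# Two string literals with the same characters are two different objects.
a = "name"
b = "name"
raise "expected equal values"    unless a == b
raise "expected distinct objects" unless a.object_id != b.object_id

# Two symbol literals with the same characters are the *same* object.
raise "expected one object" unless :name.object_id == :name.object_id

# Symbols are immutable and expose their string form via to_s.
raise "expected 'name'" unless :name.to_s == "name"

# Which is why they make cheap, stable hash keys.
person = { :name => "Aaron", :language => "Ruby" }
puts person[:name]  # => Aaron
```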
<p>The other key characteristic of Symbols is immutability: Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? With my lack of experience in Ruby I don&#39;t yet feel qualified to answer definitively. My gut, though, says no. In a high-level language, the need for extra syntax to optimise code places an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to Symbols I&#39;ll learn to accept them. But as a developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect, and for small databases it is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character in a single-byte encoding or up to 4 bytes per character in UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to the few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of a table can be lost as writes occur to it during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports, as MySQL will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted, which is suboptimal for a batch import.</p>&#13;
<p>By default each statement on an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
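<p>Putting those flags together, a backup tuned for both safe export and fast re-import might look like the following sketch (database and file names are illustrative, and it assumes a running MySQL server):</p>

```shell
# Dump without blocking writers, with keys disabled and autocommit off
# so the eventual re-import runs faster; compress after the dump completes
# so tables are not held up by the compressor.
mysqldump --single-transaction --skip-lock-tables \
          --disable-keys --no-autocommit \
          -uuser -p mydatabase > mydump.sql
gzip mydump.sql

# Restore later with:
gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase
```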
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
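<p>Since Ruby regexps are PERL-compatible, a quick Ruby sketch (using the sample string from above) shows the greedy/non-greedy difference:</p>

```ruby
s = '"The quick brown fox", "The quicker brown fox".'

# Greedy: .* expands to the last closing quote, eating almost the whole line.
greedy = s.sub(/"The quick brown.*"/, '')

# Non-greedy: .*? stops at the first closing quote after the match.
lazy = s.sub(/"The quick brown.*?"/, '')

puts greedy  # => .
puts lazy    # => , "The quicker brown fox".
```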
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new bare git repository in the current directory, or if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
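<p>A small self-contained sketch (repository and file names are illustrative) demonstrating that --cached untracks a file while leaving it on disk:</p>

```shell
# Build a throwaway repo, commit a file, then untrack it with --cached.
tmp=$(mktemp -d)
cd "$tmp"
git init -q myrepo
cd myrepo
echo "password=secret" > local.ini
git add local.ini
git -c user.name=demo -c user.email=demo@example.com commit -qm "track local.ini"

git rm --cached -q local.ini          # remove from the index only
test -f local.ini && echo "local.ini still on disk"
git ls-files | grep -q local.ini || echo "local.ini no longer tracked"
```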
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that has only the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring with:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the cache directory PEAR expects:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing come to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C, BASH the first element of ARGV is the program's name. In Ruby, and in PERL, it is the first argument passed into the program. </p>&#13;
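<p>A minimal Ruby sketch of where the program name actually lives (the filename in the comment is illustrative):</p>

```ruby
#!/usr/bin/env ruby
# In Ruby the program name is NOT ARGV[0]; it lives in $0
# (also available as $PROGRAM_NAME). ARGV is a plain Array
# holding only the real arguments, starting at index 0.
puts "program:   #{$0}"
puts "arguments: #{ARGV.inspect}"

# Running `ruby test.rb helloworld` would print the script
# path, then ["helloworld"].
```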
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<pre><code>aaron ~/Development/ruby/testapp $ rake db:create
(in /Users/aaron/Development/ruby/testapp)
Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></pre>&#13;
<p>It seems the MySQL driver gets confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often in the course of the index's lifecycle want to update documents.  This can prove tricky with the current implementation as there is no insitu update feature, you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as following:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction for this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this saves someone else some time, and hopefully Zend will update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter = null) method takes an optional 'filter' parameter. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
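<p>Since these filter values are bit flags, they can also be OR'd together; getMethods() then returns methods matching any of the combined modifiers. A small self-contained sketch (MyClass here is a made-up example class):</p>&#13;

```php
<?php
class MyClass
{
    public static function create() {}
    public function render() {}
    private function helper() {}
}

$r = new ReflectionClass('MyClass');
// Union semantics: a method is included if it is public OR static,
// so this prints create() and render() but not the private helper()
foreach ($r->getMethods(ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_STATIC) as $m) {
    echo $m->getName(), "\n";
}
```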
<p>As with many platforms, it seems PHP's documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table format, suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I'm at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dump file. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dump file with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have language packs to install, convert those XML files to UTF-8 as well:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and, if necessary, refactored if it is to be successful and help you achieve your goals. A CV's purpose is two-fold: a) to get yourself past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and set in a minimum of fonts. Giant mastheads, fancy bullets, a mess of fonts: none of it is impressing anyone, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key; save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file', not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job: work out what specific skills you got from your courses and detail each skill, qualified with a practical example of how you applied it in your coursework. Start doing that for all of your courses and very quickly you'll bang into the two-page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify and support what remains. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves someone some time: when trying to get an OAuth token from Salesforce, use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys who haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment with which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience and understanding of how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple TCP client in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
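<p>By way of a hedged illustration (the host, port and request below are placeholders), a minimal PHP TCP client might look like this; strip the $ sigils and it reads much like the equivalent C socket code:</p>&#13;

```php
<?php
// Illustrative sketch of a tiny TCP client: connect, send, read, close.
// example.com:80 and the HTTP request are assumptions for the example.
$fp = fsockopen('example.com', 80, $errno, $errstr, 5.0);
if ($fp === false) {
    die("connect failed: $errstr ($errno)\n");
}

fwrite($fp, "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n");

// Read the response line by line until the server closes the socket
while (!feof($fp)) {
    echo fgets($fp, 1024);
}
fclose($fp);
```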
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is similarly a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array of 20 chars, and str points to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
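<p>One illustrative workaround (my own example, not from the original post) is to collect the pieces first and join them in one go:</p>&#13;

```php
<?php
// Each '.' concatenation can force PHP to reallocate and copy the
// growing string. Building the fragments in an array and joining once
// with implode() defers that work to a single final join.
$parts = array();
for ($i = 0; $i < 5; $i++) {
    $parts[] = "chunk" . $i;
}
echo implode(' ', $parts), "\n"; // chunk0 chunk1 chunk2 chunk3 chunk4
```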
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in. These were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
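<p>For example, a small self-contained sketch pairing php://memory with fputcsv():</p>&#13;

```php
<?php
// Write CSV rows into a memory-backed stream, then read the whole
// buffer back without ever touching the filesystem.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'aaron'));

rewind($fh);                      // seek back to the start of the buffer
echo stream_get_contents($fh);    // prints: id,name\n1,aaron\n
fclose($fh);
```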
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, e.g. use '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is also very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded POST data) and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>But anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
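<p>If you'd rather skip the interactive editor, the same definition can be set non-interactively with svn propset (a sketch only; the lib/ target directory here is just an example):</p>

```shell
# Set the externals definition on lib/ in one step. Note that propset
# overwrites any existing svn:externals value on that path.
svn propset svn:externals \
  'Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/' lib/

# Commit the property change, then update to pull the external in.
svn commit -m "Pull in Zend Framework via svn:externals" lib/
svn update
```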
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
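<p>To make the mapping concrete, here is a rough sketch (plain PHP, not Magento's actual autoloader, which lives in Varien_Autoload) of how the class name is derived from the alias and then turned into a file path:</p>

```php
<?php
// Illustrative only: Magento-style loaders map underscores in the class
// name to directory separators when locating the source file.
function classToPath(string $class): string
{
    return str_replace('_', '/', $class) . '.php';
}

// The model part of the alias 'mymodule/a_long_name_for_a_model' is
// uppercased per underscore-separated word to build the class name:
$alias = 'a_long_name_for_a_model';
$class = 'MyPackage_MyModule_Model_'
    . str_replace(' ', '_', ucwords(str_replace('_', ' ', $alias)));

echo $class . "\n";              // MyPackage_MyModule_Model_A_Long_Name_For_A_Model
echo classToPath($class) . "\n"; // MyPackage/MyModule/Model/A/Long/Name/For/A/Model.php
```

<p>A camelcased class name like ALongNameForAModel simply has no alias that round-trips through this mapping, which is why it misbehaves on case-sensitive file systems.</p>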
<p>On Windows this is fine; on a case-sensitive file system (e.g. case-sensitive HFS+ on the Mac, or a typical Unix file system), it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up either in the db (eav_attribute) or in the admin backend under catalog-&gt;attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second, while doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old, and the record is selected.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it in a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout XML files ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name=root element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB, or de_DE (and you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile, and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
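<p>For example (a sketch; the locale names available depend entirely on what 'locale -a' reports on your machine):</p>

```shell
# List the utf-8 capable locales available on this machine
locale -a | grep -i 'utf'

# Pick one and add the export to ~/.profile (or /etc/profile for all users)
export LC_ALL='de_DE.UTF-8'
echo "LC_ALL is now $LC_ALL"
```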
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8, and if you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default both of these functions assume iso-8859-1 input. To correctly prepare your utf-8 text for output you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
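<p>For files on disk, the iconv command line utility (the same conversion the PHP extension performs) is handy too. A quick sketch, with throwaway file names:</p>

```shell
# Write a one-word ISO-8859-1 file; \351 is byte 0xE9, "é" in that encoding
printf 'caf\351\n' > latin1.txt

# Convert it to utf-8; the é becomes a two byte sequence
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt
cat utf8.txt
```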
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
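<p>The empty '' is an argument to -i: Mac OSX ships BSD sed, which insists on a backup suffix (here, an empty one), whereas GNU sed does not. If you want a single invocation that works with both, the usual trick is to always supply a real suffix:</p>

```shell
# Create a scratch file, then substitute in place keeping a .bak backup;
# the attached suffix form works with both GNU and BSD sed
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt

cat helloworld.txt       # the edited file
cat helloworld.txt.bak   # the untouched original
```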
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>${#FILES[@]} expands to the number of elements in FILES, so $(seq 0 $((${#FILES[@]} - 1))) produces the sequence of array indices. The seq command prints a sequence of numbers from x to y: if you call seq 0 4, you get 0, 1, 2, 3 and 4, one per line.</p>&#13;
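<p>Worth noting: if you only need the values and not the indices, bash can iterate the array directly, which reads a little cleaner:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

# "${FILES[@]}" expands to one word per element; the quoting keeps
# paths containing spaces intact
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```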
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Usage: php encrypt.php 'plaintext to encrypt'&#13;
// Run from the magento root so app/Mage.php resolves&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
&#13;
// Read the plaintext from the first commandline argument&#13;
$data = $_SERVER['argv'][1];&#13;
&#13;
// Encrypt using the store's crypt key from app/etc/local.xml&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
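To convince yourself the alias behaves as described, you can exercise it end to end in a throwaway repository. Everything below (paths, user details, branch name) is invented for the demo, and git is assumed to be on your path.

```shell
# Scratch demo of the sup alias: create a bare "remote" plus a working
# clone, push a branch without -u, then let the alias wire up tracking.
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q "$tmp/work" && cd "$tmp/work"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'initial commit'
git remote add origin "$tmp/remote.git"
git config alias.sup '!git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`'
git checkout -q -b zendesk
git push -q origin zendesk          # no -u, so no tracking information yet
git sup                             # sets upstream to origin/zendesk
git rev-parse --abbrev-ref '@{u}'   # prints origin/zendesk
```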
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables set in one stage of the pipeline are lost when it completes, as each new subprocess gets a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
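The subshell behaviour, and the process-substitution workaround covered in that FAQ, are easy to demonstrate in a few lines of bash:

```shell
#!/usr/bin/env bash
# Each stage of a pipeline runs in its own subshell, so the counter
# incremented inside the while loop is lost when the pipeline ends.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo "$count"    # prints 0, not 3

# Workaround: feed the loop via process substitution so it runs in the
# current shell and the variable survives.
count=0
while read -r line; do count=$((count+1)); done < <(printf 'a\nb\nc\n')
echo "$count"    # prints 3
```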
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a &#39;fast-forward&#39;.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
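To see the setting in action, here is a scratch-repo sketch (all names invented, git assumed installed) in which a diverged merge gets refused:

```shell
# Build two diverged branches in a throwaway repository.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'base'
git checkout -q -b feature
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'feature work'
git checkout -q -
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'mainline work'

# With merge.ff set to only, the diverged merge is refused rather than
# silently producing a merge commit.
git -c merge.ff=only merge feature || echo 'refused: rebase first, or force with --no-ff'
```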
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard and I like the idea of those who attempt it, and do a good job, being appropriately rewarded for doing so.</p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected; the Mockist approach cares less about observing state and more about the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
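The NULLIF-inside-MIN trick isn&#39;t MySQL specific; as a quick sketch you can try it with the sqlite3 command-line shell (the table, columns and prices below are invented for the demo):

```shell
# MIN() ignores NULLs, so NULLIF(price_cents, 0) removes the zero prices
# from consideration. Requires the sqlite3 CLI; all data is made up.
sqlite3 :memory: "
  CREATE TABLE products (group_id INT, price_cents INT);
  INSERT INTO products VALUES (1, 0), (1, 500), (1, 999), (2, 350);
  SELECT group_id, MIN(NULLIF(price_cents, 0)) AS min_price
  FROM products GROUP BY group_id;"
# prints:
# 1|500
# 2|350
```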
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one place OSX falls severely behind is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55.</p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember what. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed a run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
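As an aside, the naming problem isn&#39;t unique to PHP. POSIX shell parameter expansion bakes the same first-versus-last-occurrence distinction into # and ##, with syntax that&#39;s arguably even harder to remember:

```shell
# ${var#pattern} strips the shortest matching prefix (up to the first
# occurrence); ${var##pattern} strips the longest (up to the last).
url='http://www.google.com/a/b/c/d.img'
echo "${url##*/}"   # prints d.img (like strrchr, minus the leading slash)
echo "${url#*/}"    # prints /www.google.com/a/b/c/d.img
```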
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary who oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xerox parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who the Executive-X he mentions in <em>The Early History of Smalltalk</em> was (it was Jerry Elkind). However, I ended up coming away with a lot more: in particular, a new appreciation for a number of scientists I previously knew very little about, scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful, even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order, with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did; however, Hiltzik couches everything in terms of the scientists, and it is his ability to bring these characters to life that makes the book so riveting to read.</p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and, at times, support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his lab, the other labs, or management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning, and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang.</p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reaction to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down, but Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close, and with his departure so too ends the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM&#39;s and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it or, worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically, if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool; instead, take advantage of the local and community codepools&#39; higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb: &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
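<p>The lookup order in [2] can be sketched roughly as follows. This is an illustration of the resolution rule only, not Magento&#39;s actual autoloader code, and the function name resolveClassFile is made up for the example:</p>

```php
<?php
// Illustration only, NOT Magento's real autoloader: resolve a class name
// to a file by checking the code pools in priority order.
function resolveClassFile($class, $baseDir = 'app/code')
{
    // Mage_GoogleCheckout_Model_Api_Xml_Checkout
    //   -> Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
    $relativePath = str_replace('_', '/', $class) . '.php';

    foreach (array('local', 'community', 'core') as $pool) {
        $candidate = $baseDir . '/' . $pool . '/' . $relativePath;
        if (file_exists($candidate)) {
            return $candidate; // the first pool that has the file wins
        }
    }

    return null;
}
```

<p>So a copy of Checkout.php placed under app/code/local shadows the core version, and the core codepool stays untouched and upgrade-safe.</p>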
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. In November, though, I did resolve to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic, and by way of justification he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a>:</p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but rarely seen, and after being introduced to it I was inspired to dig up any further literature I could find. The more I read, the more I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages.</p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80, and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing were all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat, but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. They formalised most of the vocabulary of OO software development along the way, yet it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again; the only difference is perhaps a few extra layers of abstraction, bigger piles of data and slightly more exotic technologies. When I consider that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! If I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too; over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s The Mythical Man-Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can, whether it&#39;s just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I am slightly obsessive about keeping the unread count at zero, so at the end of a thirty minute work sprint I would take five minutes to quickly flick through the list. It is distracting, actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading them than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input; the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>This works because vim pipes the buffer&#39;s contents to the standard input of the shell command, and tee, running under sudo, writes them out to % (which expands to the current file&#39;s name). I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with MacPorts]]></title><description><![CDATA[<p>MacPorts puts its libraries in non-standard locations, so to build the mysql2 gem on an OS X computer, you will need to do a little bit of extra work to ensure that gem invokes make with the appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have MacPorts in /opt/local (the default) and are using the mysql55 port):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>This works fine; however, if we want to debug during a phpunit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it does for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its localhost on port 9000 back to port 9000 on the connecting client. So when xdebug on the VM goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps, and it often seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="https://github.com">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second-generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains, though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available, and in the PHP camp <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
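<p>For anyone who hasn&#39;t tried it yet, declaring dependencies with Composer amounts to a small composer.json file in the project root. The package names and version constraints below are just an illustrative example:</p>

```json
{
    "name": "acme/my-app",
    "require": {
        "php": ">=5.3.3",
        "twig/twig": "1.0.*"
    },
    "require-dev": {
        "phpunit/phpunit": "3.7.*"
    }
}
```

<p>Running composer install then fetches the packages into vendor/ and generates a vendor/autoload.php you can require to autoload everything.</p>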
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable release of package y, while package z requires the beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) and dubious (at worst) quality, and a community lacking any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves. </p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved these projects&#39; utility.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting XML, though, is raw and unformatted; <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so rather than creating temporary files we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to feed it magerun&#39;s output.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
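<p>Process substitution isn&#39;t specific to xmllint; it works anywhere a command expects a file argument. A minimal, self-contained sketch (all file names here are throwaway examples, and process substitution requires bash rather than plain sh):</p>

```shell
# Process substitution hands a command a file-like view of another
# command's output, with no temporary files involved.
printf 'b\na\nc\n' > /tmp/ps_demo.txt
bash -c 'diff <(sort /tmp/ps_demo.txt) <(printf "a\nb\nc\n")' && echo "sorted output matches"
```

The magerun invocation above applies the same trick: xmllint is handed a path like /dev/fd/63 that reads as an ordinary file but is really magerun&#39;s output stream.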
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
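<p>The fetch-then-inspect flow can be sketched end to end. A self-contained demo (all repo paths and file names are throwaway examples; HEAD..origin/HEAD is used instead of master..origin/master to avoid hardcoding the default branch name):</p>

```shell
# Simulate an upstream repo gaining a commit, then preview the changed
# files locally before merging.
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init
git clone -q "$tmp/upstream" "$tmp/local"
echo hello > "$tmp/upstream/newfile.txt"
git -C "$tmp/upstream" add newfile.txt
git -C "$tmp/upstream" -c user.email=a@b -c user.name=demo commit -qm "add newfile"
git -C "$tmp/local" fetch -q origin
git -C "$tmp/local" diff --name-only HEAD origin/HEAD   # lists newfile.txt
```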
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there than on pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blog posts like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. That post has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails works through the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? One thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are often wrong, or at best incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
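<p>Newer versions of git can also combine the two steps: git fetch --prune (or -p) fetches and drops stale remote-tracking branches in one go. A self-contained sketch (all repo paths and branch names are throwaway examples):</p>

```shell
# One clone ("alice") creates and then deletes a remote branch; a second
# clone ("bob") is left with a stale remote-tracking ref until it prunes.
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/alice"
git -C "$tmp/alice" -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init
git -C "$tmp/alice" push -q origin HEAD
git -C "$tmp/alice" push -q origin HEAD:refs/heads/doomed   # create extra branch
git clone -q "$tmp/remote.git" "$tmp/bob"                   # bob sees origin/doomed
git -C "$tmp/alice" push -q origin :doomed                  # delete it upstream
git -C "$tmp/bob" branch -r                                 # origin/doomed still listed
git -C "$tmp/bob" fetch -q --prune origin
git -C "$tmp/bob" branch -r                                 # origin/doomed now gone
```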
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of how bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to XSS-safe the code: they fixed one bit but (programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much as ; does in regular bash, escaped so the shell passes it through to find).</p>
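<p>On GNU find there&#39;s also the -delete action, which avoids spawning an rm process per file. A self-contained sketch (all file names are throwaway examples; note that -delete must come after the tests, or find will delete everything it visits):</p>

```shell
# Three data files at different dates, two marker files as the range.
tmp=$(mktemp -d)
mkdir "$tmp/data"
touch -t 202001010000 "$tmp/data/old"
touch -t 202006150000 "$tmp/data/mid"
touch -t 202012310000 "$tmp/data/new"
touch -t 202003010000 "$tmp/start"     # range start: March 2020
touch -t 202009010000 "$tmp/end"       # range end: September 2020
# Delete only files modified inside the range: here, just "mid".
find "$tmp/data" -type f -newer "$tmp/start" ! -newer "$tmp/end" -delete
ls "$tmp/data"                         # old and new survive
```

The marker files live outside the searched directory so they can&#39;t match their own range.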
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh, if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had done this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
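<p>Putting it together, the whole dance looks something like this (a sketch, assuming the default Macports prefix of /opt/local):</p>
<pre><code>$ sudo sh -c &#39;echo /opt/local/bin/bash &gt;&gt; /etc/shells&#39;
$ chsh -s /opt/local/bin/bash
</code></pre>
<p>The -s flag skips the interactive prompt; open a new terminal for the change to take effect.</p>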
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; - | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because of shell escaping rules around single quotes, the quote character is passed in via the q variable. paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal symbols &#39;[&#39; and &#39;]&#39;. Awk, or any other tool with a concatenation approach, would do just fine here too.</p>
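<p>If you want to try the text-mangling part of the pipeline without a database handy, you can feed it sample lines with printf instead (note the explicit &#39;-&#39; argument so paste reads standard input on BSD as well as GNU systems):</p>
<pre><code>$ printf &#39;red\ngreen\nblue\n&#39; | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; - | sed &#39;s/\(.*\)/[\1];/&#39;
[&#39;red&#39;,&#39;green&#39;,&#39;blue&#39;];
</code></pre>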
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
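<p>For completeness, here is roughly what that query looks like over HTTP (the collection1 core and last_seen field are hypothetical, and the spaces and brackets are URL-encoded):</p>
<pre><code>$ curl &#39;http://localhost:8983/solr/collection1/select?q=-(-last_seen:%5B2012-01-01T00:00:00Z%20TO%20NOW%5D%20AND%20last_seen:%5B*%20TO%20*%5D)&#39;
</code></pre>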
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products, and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and let us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It does mean the validator has to be constructed anew on each loop, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils... and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean, in the absolute best case, merely becoming un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now, in order to have a functional chroot, we need the proc, dev and sys filesystems mounted inside the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
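<p>In other words, run from the livecd:</p>
<pre><code>$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>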
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. This is typically used on grouped products when determining if all their child stock items are out of stock. </p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so the group was marked out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array result set where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed down to just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept, that software is about communication, is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being. </p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to Twig files, edit your .vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (the main exception being Mac OS X&#39;s Terminal.app, which starts every session as a login shell), .bash_profile is sourced only on login. Specifically, that means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some room for confusion when you open a login shell from within an existing session, such as with the <em>su -</em> command, or via an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only sourced if .bash_profile explicitly sources it.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to simply source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
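<p>To make that concrete, here&#39;s a minimal sketch of that arrangement (the contents are illustrative, not prescriptive):</p>
<pre><code># ~/.bash_profile -- read by login shells
export PATH=&quot;$HOME/bin:$PATH&quot;
# pull in the interactive settings so login shells get them too
[ -f ~/.bashrc ] &amp;&amp; . ~/.bashrc

# ~/.bashrc -- read by interactive non-login shells
alias ll=&#39;ls -lh&#39;
</code></pre>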
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
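<p>Alternatively, if you&#39;d rather keep PHP out of the template, my understanding is the same block can be pulled in declaratively via your theme&#39;s layout XML (the reference and block names below are illustrative):</p>
<pre><code>&lt;reference name=&quot;content&quot;&gt;
    &lt;block type=&quot;cms/block&quot; name=&quot;my.static.block&quot;&gt;
        &lt;action method=&quot;setBlockId&quot;&gt;&lt;block_id&gt;identifier&lt;/block_id&gt;&lt;/action&gt;
    &lt;/block&gt;
&lt;/reference&gt;
</code></pre>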
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>, presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it all in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Note that <em>modifyvm</em> can only be used while the VM is powered off; use <em>controlvm</em> to change a running VM.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
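<p>For a running VM, as far as I can tell the equivalent rules can be added and removed with <em>controlvm</em> (rule name and ports here are just examples):</p>
<pre><code>VBoxManage controlvm &quot;VM name&quot; natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage controlvm &quot;VM name&quot; natpf1 delete &quot;guestssh&quot;
</code></pre>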
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libXss, not being there. But it <em>is</em> there; specifically though, it&#39;s a 64-bit library. I bet Skype isn&#39;t 64-bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 shared libraries. When you hit these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both the compatible libXss library and assorted Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into, and then ripped out of, the core quite frequently over the last few releases. A common problem I&#39;m seeing lately while upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause for this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), takes the collection and iterates over it, assigning each address to an array keyed by its entityId. It then returns the entry whose key matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing it this way already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
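<p>For instance, running identify against a file prints a one-line summary per image; the output below is illustrative and will vary with your files and ImageMagick version:</p>
<pre><code>$ identify photo.jpg
&gt; photo.jpg JPEG 1920x1080 1920x1080+0+0 8-bit DirectClass 1.2MB
</code></pre>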
<p>The power of these tools is best wielded with simple loops in bash, or using GNU find. For example, to convert a bunch of images from 1920x1080 down to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how sponge waits for the end-of-file character (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 (cp1252) to UTF-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent: it is safe to convert iso-8859-1 content as if it were cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
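<p>For example, with GNU file the -i switch reports its best guess at the mime type and character set (encoding detection is a heuristic, so treat the reported charset as a hint rather than gospel; the output below is illustrative):</p>
<pre><code>$ file -i myfile.txt
&gt; myfile.txt: text/plain; charset=iso-8859-1
</code></pre>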
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with that copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it back to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are anything like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
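<p>As an aside, the same result can be had with a plain pipe; passing -f - explicitly tells tar to read the archive from stdin, which avoids relying on tar&#39;s compiled-in default archive device:</p>
<pre><code>$ wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxvf -
</code></pre>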
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
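<p>If you would rather skip the confirmation prompt altogether (when scripting an upgrade, for instance), Drush accepts a --yes (-y) option that answers &#39;y&#39; to all prompts; check drush help for your installed version to confirm it supports this flag:</p>
<pre><code>$ drush -y pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>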
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
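<p>As a shortcut, Git 1.7.0 and newer let you collapse the last two steps into one: pushing with -u (--set-upstream) configures the tracking relationship for you.</p>
<pre><code>$ git push -u origin my-new-feature
</code></pre>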
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G are <em>appended</em> to the user&#39;s existing list of groups. Without it, the existing supplementary groups will be replaced by whatever list you supply.</p>
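<p>To verify the change took effect, list the user&#39;s group memberships with the id command (the output shown assumes the scenario above; note the user may need to log out and back in before new memberships apply to running sessions):</p>
<pre><code>$ id -nG aaron
&gt; aaron wheel
</code></pre>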
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
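<p>For reference, a typical invocation mirrors phpdoc&#39;s options; the -d (source directory) and -t (target directory) flags below are my assumption of the common case, so check docblox --help for the options your installed version actually supports:</p>
<pre><code>$ docblox -d ./application -t ./build/api
</code></pre>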
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
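<p>A related tip: git stash pop applies the most recent stash and drops it from the stash list in one step, saving you a separate git stash drop once you&#39;re happy the changes applied cleanly.</p>
<pre><code>$ git stash pop
</code></pre>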
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
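<p>There is also a shorthand for the first case: given origin/develop, git checkout --track derives the local branch name for you and sets up tracking in one go.</p>
<pre><code>$ git checkout --track origin/develop
</code></pre>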
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins user&#39;s home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
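<p>Alternatively, you can have git write the file for you by running git config as the jenkins user; sudo&#39;s -H flag sets HOME to the target user&#39;s home directory so --global writes to the right place (adjust the sudo incantation to suit your setup):</p>
<pre><code>$ sudo -u jenkins -H git config --global user.name &quot;Jenkins&quot;
$ sudo -u jenkins -H git config --global user.email &quot;jenkins@localhost&quot;
</code></pre>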
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework Test Cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build once before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
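<p>For example, to trigger a build of a job and wait until it completes (the job name here is hypothetical; the trailing -s tells the build command to block until the build finishes):</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 build my-project -s
</code></pre>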
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
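<p>If you&#39;d rather script the port change, a sed one-liner will do it. This is a sketch (GNU sed assumed), shown against a local copy of the file so it&#39;s safe to try:</p>

```shell
# Work on a copy of /etc/default/jenkins (falling back to a sample
# file if it doesn't exist) and rewrite the HTTP_PORT line in place.
cp /etc/default/jenkins jenkins.defaults 2>/dev/null \
  || printf 'HTTP_PORT=8080\n' > jenkins.defaults
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8180/' jenkins.defaults
grep '^HTTP_PORT=' jenkins.defaults
```

<p>Point sed at /etc/default/jenkins itself (under sudo) to make the real change, then restart the service as above.</p>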
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have lost the most time to.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer:</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it was enough for Google to lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the graphical package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You now have set up the remote and pushed your master branch into it. From here it gets tricky, because a subsequent git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch:</p>
<p> $ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally, I find option 4 the best, as it involves the least work.</p>
<p>To avoid having to do this at all, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
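<p>To see the whole flow end to end, here is a runnable local sketch, with a bare repository standing in for GitHub (paths and identity are placeholders). Note that on git 1.7.0+ an initial <em>git push -u origin master</em> records the upstream as it pushes, which sidesteps the error above in one step:</p>

```shell
# A bare repo stands in for the GitHub remote; `git push -u` records
# origin/master as the upstream so a plain `git pull` works afterwards.
tmp=$(mktemp -d)
git init --bare "$tmp/foo.git"
git init "$tmp/work"
cd "$tmp/work"
git -c user.name=me -c user.email=me@example.com \
    commit --allow-empty -m 'initial commit'
git branch -M master                # ensure the branch is named master
git remote add origin "$tmp/foo.git"
git push -u origin master           # push and set the upstream in one step
git config branch.master.remote     # confirm the upstream remote is recorded
```
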
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a roughly 400 MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of the tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
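<p>A minimal sketch of that approach (not the exact gist; the credentials and pattern below are placeholders):</p>

```shell
# Ask mysql for the tables matching a LIKE pattern, then hand the
# resulting list to a single mysqldump invocation.
mysqldump_bypattern() {
  local user=$1 db=$2 pattern=$3
  local tables
  tables=$(mysql -u"$user" -p -N -B -e "SHOW TABLES LIKE '$pattern'" "$db")
  # word-splitting $tables into separate table arguments is intentional
  mysqldump -u"$user" -p "$db" $tables
}
```

<p>Usage would look something like <em>mysqldump_bypattern user mydb &#39;mytables_%&#39; &gt; mytables.sql</em>.</p>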
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. Until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby, the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym, a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if they consist of the same sequence of characters, are different objects. In Ruby two symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found explaining them are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it&#8217;s also considerably slower than gzip.</p>&#13;
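<p>The trade-off is easy to measure for yourself (a sketch; gzip and bzip2 assumed installed, with a throwaway file standing in for a real dump):</p>

```shell
# Build a highly compressible stand-in for a dump file, compress it
# both ways, then compare the resulting sizes.
head -c 1000000 /dev/zero | tr '\0' 'a' > sample.sql
gzip -c sample.sql > sample.sql.gz
bzip2 -c sample.sql > sample.sql.bz2
ls -l sample.sql sample.sql.gz sample.sql.bz2
```
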
<p>It’s sometimes tempting to compress the dump inline, piping mysqldump straight into gzip or bzip2.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: the compression slows the dump down, and for its duration MyISAM (by default) locks tables, denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but prolonging the dump should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may not be worth the few megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of a MyISAM table can be lost as writes occur to it during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
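<p>Putting all four flags together, a full dump command might look like the sketch below. It is printed rather than executed here, since it needs a live server; the database name and credentials are placeholders:</p>

```shell
# Compose the recommended backup invocation without running it
# (it needs a live MySQL server). All names below are placeholders.
DB=mydatabase
DUMP_CMD="mysqldump --single-transaction --skip-lock-tables \
--disable-keys --no-autocommit -uuser -p $DB"
# gzip keeps restores fast while still shrinking the dump considerably
echo "$DUMP_CMD | gzip -c > $DB.sql.gz"
```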
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
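<p>The whole remote-and-push dance above can be rehearsed on a single machine by substituting a local bare repository for the SSH URL; a disposable sketch (all paths are throwaway temp directories, and the user details are placeholders):</p>

```shell
# Rehearse the push workflow locally: a bare repo stands in for the
# remote SSH URL. Everything lives in a throwaway temp directory.
set -e
work=$(mktemp -d)
git init --bare "$work/myrepo.git" >/dev/null
git init "$work/local" >/dev/null
cd "$work/local"
git config user.email you@example.com && git config user.name You
echo hello > README
git add README
git commit -m 'Initial commit' >/dev/null
branch=$(git symbolic-ref --short HEAD)  # master or main, depending on git version
git remote add origin "$work/myrepo.git"
git push origin "$branch" >/dev/null 2>&1
git ls-remote origin
```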
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing comes to the fore. A safety net to protect us when we start changing an application. But often getting a test environment set-up and representative of the live system is a lot of work in itself, and the temptation to just give into the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the set-up of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is: in PHP, C, and BASH the first element of ARGV is the program's name. In Ruby, and in Perl, it is the first argument passed into the program.</p>&#13;
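<p>The BASH behaviour is easy to verify with a throwaway script; a quick sketch:</p>

```shell
# Show that in a shell script $0 is the program name and $1 the first
# argument -- the C-style convention described above.
set -e
tmp=$(mktemp -d)
cat > "$tmp/test.sh" <<'EOF'
#!/bin/sh
echo "argv0: $0"
echo "argv1: $1"
EOF
chmod +x "$tmp/test.sh"
out=$("$tmp/test.sh" helloworld)
echo "$out"
rm -rf "$tmp"
```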
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL rubygem as directed by Rake, you link against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow; you're loading the full index in an attempt to find a unique document with an ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction for this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('http://a.com/uri', 'uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored, the distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Seach_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First, back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dump file. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dump file with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
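<p>As a quick sanity check (on a hypothetical one-line sample, not your real dump) you can watch the substitution work on a throwaway file first. Two caveats: the in-place -i flag shown here is GNU sed syntax, and a blanket s/latin1/utf8/g will also rewrite the literal string 'latin1' anywhere it happens to appear in post content, so skim the diff before importing.</p>&#13;

```shell
# Rewrite the charset declaration in a one-line stand-in for the dump.
# GNU sed syntax; on macOS/BSD use: sed -i '' 's/latin1/utf8/g' file
printf 'CREATE TABLE post (id INT) DEFAULT CHARSET=latin1;\n' > /tmp/sample.sql
sed -i 's/latin1/utf8/g' /tmp/sample.sql
cat /tmp/sample.sql
# CREATE TABLE post (id INT) DEFAULT CHARSET=utf8;
```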
<p>Now it's time to take your forum offline. Navigate to AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. It can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
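<p>If you want to convince yourself iconv is doing the right thing, round-trip a single accented character first (an illustrative aside, not part of the guide): the Latin-1 byte 0xE9, 'é', should come out as the two-byte UTF-8 sequence 0xC3 0xA9.</p>&#13;

```shell
# Convert one Latin-1 e-acute (octal \351 = 0xE9) and dump the result
# as hex; 'caf' followed by 0xE9 becomes 63 61 66 c3 a9.
printf 'caf\351' | iconv -f latin1 -t utf-8 | od -An -tx1 | tr -d ' \n'
# 636166c3a9
```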
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you use a downloaded language pack, its XML file needs the same treatment before you re-import it:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is twofold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time-consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A level doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh, and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fishmongering; it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and restrained in its use of fonts. Giant mastheads, fancy bullets and a mess of typefaces impress no one, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key; save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback. For some good resources on CV writing I really recommend the following links: <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially: <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now that the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the title similarity, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> <span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves someone some time: when trying to get an OAuth token from Salesforce, use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in strictly typed OO (e.g. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sysadmin monkeys who haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit though of learning a bit of C or C++, is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in. But similarly, a sometimes useful characteristic that makes the environment still relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL '\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters; str itself points to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in. These were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend starting out with the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
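<p>You can poke at this from a shell, too (my aside, not from the paper): hand od two raw bytes and ask it to read them back as a single 16-bit integer, and the answer tells you which end your machine puts first.</p>&#13;

```shell
# The bytes 0x01 0x00 read back as the 16-bit value 1 on a
# little-endian host (x86, and ARM as commonly configured);
# a big-endian host would print 256.
printf '\001\000' | od -An -td2 | tr -d ' \n'
```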
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, e.g. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one with the request Worldpay sent to your callback URL, including the encoded post data, and one with the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
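<p>Put together, the whole workflow looks something like this (a sketch; 'library' stands in for whichever directory you set the property on, and svn propset with an inline value is the non-interactive equivalent of the propedit route for a single external):</p>&#13;
<p><code>svn propset svn:externals 'Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/' library/<br />svn update library/</code></p>&#13;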
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
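<p>The transform the classloader applies is roughly the following (a sketch of the behaviour described above, not Magento's actual autoloader code):</p>&#13;
<p><code>$alias = 'a_long_name_for_a_model';<br />$path = str_replace(' ', '/', ucwords(str_replace('_', ' ', $alias))) . '.php';<br />// $path is now 'A/Long/Name/For/A/Model.php'</code></p>&#13;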
<p>On Windows this is fine; on a case-sensitive file system (e.g. case-sensitive HFS on a Mac, or a typical Unix file system) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If, like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similarly to their php-based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
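<p>For a unix timestamp column, the equivalent comparison might look like this (a sketch; 'ts_column' is an illustrative column name):</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) &gt; FROM_UNIXTIME(ts_column)</code></p>&#13;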
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field has a date earlier than this value, it is more than 30 days old, and the query above will match it.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three-column layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>Type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore within the &lt;customer_account&gt; element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your shell environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>To get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale (you can get a list of available locales by calling 'locale -a'). If you're using en_GB or de_DE, just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
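<p>As a quick sketch (assuming a bash shell and the de_DE locale from this post; substitute your own):</p>

```shell
# Pick a UTF-8 variant of your locale; 'locale -a' lists what is installed.
export LC_ALL='de_DE.UTF-8'
echo "$LC_ALL"
```

<p>Putting the export line in ~/.profile or ~/.bash_profile makes it permanent for new shells.</p>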
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte on its own is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<pre>$utf8Text = iconv(&#39;iso-8859-1&#39;, &#39;utf-8&#39;, $isoText);</pre>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<pre>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</pre>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It isn't, at least not with the BSD sed that ships with OSX: -i expects a backup-file extension as its argument, so you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to pass an empty extension: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
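<p>A variation that sidesteps the GNU/BSD incompatibility altogether is to always supply a backup extension; the attached form -i.bak (no space) is accepted by both seds:</p>

```shell
tmp=$(mktemp)
printf 'hello world\n' > "$tmp"
# -i.bak edits the file in place and keeps the original as "$tmp.bak";
# this form works with both GNU sed (Linux) and BSD sed (OSX)
sed -i.bak 's/hello/goodbye/g' "$tmp"
cat "$tmp"
```

<p>The file now reads "goodbye world", with the original preserved in the .bak copy.</p>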
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento. Below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $((${#FILES[@]} - 1)) expands to the number of elements in FILES minus one, and seq produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
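<p>For what it's worth, bash can also iterate the array values directly, with no seq or index arithmetic required:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )
# "${FILES[@]}" expands to one word per element, so paths containing
# spaces survive the loop intact
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```

<p>Which form you prefer is a matter of taste; the seq version is handy when you genuinely need the numeric index.</p>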
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: baseurls, or swapping in test payment or shipping account credentials.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so for now you have to run it from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to, don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
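<p>The branch-name lookup the alias relies on is easy to sanity-check on its own; git symbolic-ref --short HEAD just prints the current branch (throwaway repo, purely for illustration):</p>

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git checkout -qb zendesk        # create and switch to a branch
git symbolic-ref --short HEAD   # prints the current branch name
```
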
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
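<p>A quick demonstration of the gotcha (plain bash, without the lastpipe option enabled):</p>

```shell
count=0
printf 'a\nb\n' | while read -r line; do
  count=$((count + 1))   # increments a copy that lives in the subshell
done
echo "$count"            # still 0: the increments died with the subshell
```
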
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>&#13;
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you&#39;re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit&#39;s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p></blockquote>&#13;
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can tell git merge to only perform fast-forward merges.</p>&#13;
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
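<p>To see the setting in action, here's a disposable-repo walkthrough (identity is set inline with -c so it runs regardless of your global config; branch name is read back rather than assumed):</p>

```shell
cd "$(mktemp -d)"
git init -q .
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m 'base'
main=$(git symbolic-ref --short HEAD)   # works whether HEAD is master or main
git checkout -qb feature
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m 'feature work'
git checkout -q "$main"
# feature is strictly ahead of $main, so an ff-only merge succeeds
# and simply moves the branch pointer -- no merge commit is created
git -c merge.ff=only merge -q feature && echo 'fast-forwarded'
```

<p>Had the two branches diverged, the same merge would refuse to run instead of silently creating a merge commit.</p>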
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
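<p>That advice carries over to any xUnit framework. As a quick sketch (in Python&#39;s unittest rather than PHPUnit, purely because it keeps the example compact; the test case is invented), the optional message argument plays the same role as PHPUnit&#39;s final $message parameter:</p>

```python
import unittest

class CartTest(unittest.TestCase):
    def test_empty_cart_total_is_zero(self):
        total = 0  # stand-in for a real cart.total() call
        # The third argument is the failure description: when the assertion
        # fails, the report explains the problem in domain terms instead of
        # just printing the mismatched values.
        self.assertEqual(total, 0, "a new cart should have a zero total")

# Run the test programmatically so the example is self-contained.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CartTest))
```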
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, its motivations, and perhaps a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
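<p>The contrast is easier to see in code than in prose. A minimal sketch in Python&#39;s unittest.mock (the Greeter and Mailer roles are invented for illustration): the statist test asserts on the resulting state, the mockist test asserts on the message sent to a collaborator.</p>

```python
from unittest.mock import Mock

class Greeter:
    """Toy class invented for this sketch; it both mutates state and talks to a collaborator."""
    def __init__(self, mailer):
        self.mailer = mailer
        self.greeted = []

    def greet(self, name):
        self.greeted.append(name)                 # observable state change
        self.mailer.send(name, "Hello, " + name)  # message to a collaborator

# Statist style: run the behaviour, then check the end state.
greeter = Greeter(Mock())
greeter.greet("Ada")
assert greeter.greeted == ["Ada"]

# Mockist style: check the interaction with the collaborator instead.
mailer = Mock()
Greeter(mailer).greet("Ada")
mailer.send.assert_called_once_with("Ada", "Hello, Ada")
```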
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that belong to a logical group, and you want to select the lowest priced product from that group; however, some products actually have a 0.00 price (for whatever reason). You don&#39;t want to show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null when tableref.column equals 0. Aggregate functions like MIN ignore nulls, so the zero values simply drop out of consideration, leaving the smallest value greater than zero.</p>
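<p>If you want to see it end to end, the same trick works in SQLite, which also supports NULLIF. A quick sketch in Python (the table name and sample data are made up; the original query targets MySQL):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (group_id INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [(1, 0.0), (1, 9.99), (1, 14.50), (2, 0.0), (2, 5.00)],
)

# NULLIF(price, 0) maps zero prices to NULL, which MIN then ignores.
rows = conn.execute(
    "SELECT group_id, MIN(NULLIF(price, 0)) FROM products GROUP BY group_id"
).fetchall()
print(sorted(rows))  # [(1, 9.99), (2, 5.0)]
```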
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum and portage (heck, even sysvinit), OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying, so we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);  // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float, etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
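<p>To make the wrapper idea concrete, here is roughly what it buys you, sketched in Python with an invented Str class rather than PHP: the method name can state the intent, instead of leaning on strstr-versus-strrchr trivia.</p>

```python
class Str:
    """Toy string wrapper; the class and method names are invented for illustration."""
    def __init__(self, value):
        self.value = value

    def after_last(self, needle):
        # Everything after the last occurrence of needle
        # (what strrchr gives you, minus the needle itself).
        _, sep, tail = self.value.rpartition(needle)
        return tail if sep else self.value

url = Str("http://www.google.com/a/b/c/d.img")
print(url.after_last("/"))  # d.img
```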
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had commited themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool; instead, take advantage of the local and community codepools&#39; higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
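<p>Concretely, that means mirroring the class&#39;s path under app/code/local and copying the file there before editing it. A quick sketch (the first three commands only fake a minimal Magento tree so it is self-contained; in a real install, run the last two from the Magento root):</p>

```shell
# stand-in for a Magento root, so the sketch can run anywhere
cd "$(mktemp -d)"
mkdir -p app/code/core/Mage/GoogleCheckout/Model/Api/Xml
touch app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php

# the actual steps: mirror the path under local/ and copy the class across,
# then apply the one-line fix to the copy in app/code/local
mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
   app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
```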
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
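<p>Under the hood, -s quit sends SIGQUIT to the master process (whose pid nginx writes to its pid file) and -s stop sends SIGTERM. The same pattern applies to any daemon; here is a self-contained sketch using a stand-in process and a made-up pid file path (the demo sends TERM, since a plain background process will not honour QUIT):</p>

```shell
# a stand-in "daemon" that records its pid, the way nginx does
sleep 30 &
echo "$!" > /tmp/demo.pid

# the moral equivalent of `nginx -s stop`; `nginx -s quit` would send QUIT
kill -TERM "$(cat /tmp/demo.pid)"

wait "$(cat /tmp/demo.pid)" 2>/dev/null
kill -0 "$(cat /tmp/demo.pid)" 2>/dev/null || echo "process gone"
```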
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and reasons about why its lasting success lay more in blazing a trail for others to follow than in being a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and, by way of justification, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a>:</p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. And despite Smalltalk formalising most of the vocabulary for OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list setup in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars or hoverboards, and as developers we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits which improve my command line efficiency.</p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
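<p>The same idea works on any delimited input; a tiny self-contained sample (no /etc/passwd needed):</p>

```shell
# two colon-delimited columns, aligned into a table by column -t
printf 'user:uid\nroot:0\nnobody:65534\n' | column -s':' -t
```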
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and then entering an integer will switch you directly to that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
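<p>The trick works because :w ! pipes the buffer to a shell command&#39;s stdin, and tee, running under sudo, writes its stdin out to the file named by % (the current file). The shell-level equivalent, with sudo and the real config path dropped so the sketch runs anywhere:</p>

```shell
# echo stands in for the vim buffer being piped to `sudo tee /etc/some.conf`
echo "max_connections = 100" | tee /tmp/demo.conf > /dev/null
cat /tmp/demo.conf
```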
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine; however, to debug during a phpunit test you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
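<p>With the tunnel in place, xdebug on the VM can simply point at its own localhost. A minimal php.ini fragment for this setup (xdebug 2.x directive names, as current when this was written) might look like:</p>

```ini
xdebug.remote_enable=1
; with the ssh -R tunnel up, localhost:9000 on the VM reaches the IDE
xdebug.remote_host=127.0.0.1
xdebug.remote_port=9000
```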
<p>It&#39;s a bit of a hack, but a time-saving one. The alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
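<p>Declaring a dependency takes nothing more than a composer.json file in the project root; a minimal sketch (the package and version constraint here are only illustrative):</p>

```json
{
    "require": {
        "monolog/monolog": "~1.0"
    }
}
```

<p>Running composer install then resolves and fetches everything into vendor/ and generates an autoloader.</p>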
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999 when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age strictly speaking is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example, say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best (and dubious quality at worst), and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host it themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file argument to work with (though it will read standard input if you pass &#39;-&#39;). Bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature lets us hand it magerun&#39;s output as if it were a file, avoiding temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
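<p>Process substitution hands a command a file-like path (something like /dev/fd/63) that is actually backed by a pipe. A self-contained illustration, with diff standing in for xmllint and bash invoked explicitly since this is a bash/zsh feature:</p>

```shell
# each <(...) appears to diff as a readable file backed by a pipe
bash -c 'diff <(printf "a\nb\n") <(printf "a\nb\n") && echo "no differences"'
```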
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively if you don&#39;t care what differs in the specific contents between two branches, and only want to see different files you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
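<p>A throwaway repository shows the shape of the output. This is a sketch (it assumes git is on your PATH; the file name a.txt is illustrative):</p>

```shell
# set up a disposable repo with a single commit
cd "$(mktemp -d)"
git init -q demo && cd demo
echo hello > a.txt && git add a.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "add a.txt"

# just the affected file names; --format= suppresses the commit header
git show --name-only --format= HEAD
```
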
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important, as by skipping the grant tables, any user can connect to the running mysqld service, with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and where the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means: if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how the PHP community is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
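<p>The whole cycle can be reproduced in a throwaway directory. A sketch, assuming git is installed; the repo paths and the branch name doomed are stand-ins:</p>

```shell
cd "$(mktemp -d)"
# stand-in for the remote, with one branch that will go away
git init -q remote.git
(cd remote.git && git -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m init && git branch doomed)

git clone -q remote.git local && cd local
git branch -r                    # lists origin/doomed
(cd ../remote.git && git branch -D doomed)   # branch deleted upstream
git branch -r                    # ...but our repo still lists it
git remote prune origin          # drops the stale remote-tracking branch
git branch -r                    # origin/doomed is gone
```
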
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to XSS-proof the code: they fixed one instance, but (and programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). To use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were modified between a start and an end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped \; terminates the command sequence (much as ; does in regular bash).</p>
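<p>The whole recipe can be tried safely in a scratch directory. A sketch; the marker and sample file names are illustrative:</p>

```shell
cd "$(mktemp -d)"
# boundary marker files (touch -t takes a CCYYMMDDhhmm timestamp)
touch -t 202401010000 start_marker
touch -t 202401310000 end_marker
# sample files inside and outside the window
touch -t 202401150000 in_range
touch -t 202402150000 too_new
touch -t 202312150000 too_old
# list files modified after start and not after end,
# excluding the markers themselves
find . -type f -newer start_marker ! -newer end_marker ! -name '*_marker'
# → ./in_range
```

Swap the final `-ls`-style listing for `-exec rm {} \;` only once the list looks right.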
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy-to-install gem (which comes with horribly out-of-date basebox templates), or installing the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit (as root or via sudo) this file and add your Macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
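<p>To see the pipeline&#39;s shape without a database handy, printf can stand in for the mysql output (one column value per line; the values here are illustrative):</p>

```shell
# stand-in for `mysql --silent` output: one column value per line
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# → ['red','green','blue'];
```
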
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
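<p>The De Morgan trick behind this query can be sanity-checked outside Solr. Here is a sketch in awk over made-up id,date rows (an empty date stands for NULL): keeping not-(outside-range AND not-null) selects exactly the in-range-or-null rows.</p>

```shell
# rows: id,date (empty date = NULL); keep rows dated in 2012 OR with no date,
# expressed as the negation !( outside-range && not-null )
printf 'a,2011-05-01\nb,2012-03-09\nc,\nd,2013-01-15\n' \
  | awk -F, '!( !($2 >= "2012-01-01" && $2 <= "2012-12-31") && $2 != "" )'
# → b,2012-03-09
# → c,
```
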
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments on both sides. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think, though, that in Object Oriented software you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules, and this caused that product/salesrule index loop to detonate.</p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each iteration. That means a new validator has to be constructed each time around the loop, but it also allows PHP to free the memory the old one was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
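<p>If the entry doesn&#39;t show up in the launcher, it&#39;s worth running it through desktop-file-validate (from the desktop-file-utils package) to catch any syntax mistakes:</p>

```
$ desktop-file-validate ~/.local/share/applications/jetbrains-phpstorm.desktop
```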
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. And by break I mean that, in the absolute best case, the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution inside the chroot, so copy the host&#39;s /etc/resolv.conf over with <code>cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf</code>.</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
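<p>For the record, the repair from inside the chroot boiled down to something like the following (the disk device here is an assumption; use whichever disk your bootloader actually lives on):</p>

```
$ apt-get update
$ apt-get upgrade
$ grub-install /dev/sda
$ update-grub
```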
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
    $statuses = array();

    $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
    $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
    $adapter        = $this-&gt;_getReadAdapter();

    if (!is_array($productIds)) {
        $productIds = array($productIds);
    }

    if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
        $select = $adapter-&gt;select()
            -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
            -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
            -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
            -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

        $rows = $adapter-&gt;fetchPairs($select);
    } else {
        $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

        $select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
            -&gt;joinLeft(
                array(&#39;t2&#39; =&gt; $attributeTable),
                &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
                array(&#39;t1.entity_id&#39;)
            )
            -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
            -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
            -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
        $rows = $adapter-&gt;fetchPairs($select);
    }

    foreach ($productIds as $productId) {
        if (isset($rows[$productId])) {
            $statuses[$productId] = $rows[$productId];
        } else {
            $statuses[$productId] = -1;
        }
    }

    return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column came first and the product id column second (the reverse of the if branch, where the product id column comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value) the result set consisted of just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it gets used as the key.</p>
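<p>The collapse is easy to reproduce outside of PHP. Here&#39;s a quick shell sketch (with invented row data) of fetchPairs&#39; first-column-becomes-the-key behaviour:</p>

```shell
# fetchPairs keeps one value per distinct first column. With the status column
# first, three product rows collapse into two pairs; with the product id first
# you keep one pair per product. (Row data is invented for illustration.)
buggy_order='1 10
1 20
2 30'    # status first, then product id
fixed_order='10 1
20 1
30 2'    # product id first, then status

count_pairs() {
    echo "$1" | awk '!seen[$1]++ { n++ } END { print n }'
}

echo "status as key:     $(count_pairs "$buggy_order") pairs"
echo "product id as key: $(count_pairs "$fixed_order") pairs"
```

<p>Only two pairs survive for three products, which is exactly the shape of the bug.</p>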
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double. A placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept, that software is about communication, is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion around login shells you open after booting, such as with the su - command or the explicit login shell sometimes provided by a desktop environment. In these cases the rule still applies: a login shell sources .bash_profile, and .bashrc is only sourced if your .bash_profile does so explicitly.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
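<p>You can watch this behaviour directly by pointing bash at a throwaway HOME (the temp directory and marker strings here are invented for the demonstration):</p>

```shell
# Create a scratch HOME with startup files that announce themselves
demo_home=$(mktemp -d)
echo 'echo profile-sourced' > "$demo_home/.bash_profile"
echo 'echo bashrc-sourced'  > "$demo_home/.bashrc"

# A login shell (console login, su -, bash -l) reads .bash_profile
login_out=$(HOME="$demo_home" bash -l -c ':' 2>/dev/null)

# An interactive non-login shell (a new terminal window) reads .bashrc
interactive_out=$(HOME="$demo_home" bash -i -c ':' 2>/dev/null)

echo "login shell ran:       $login_out"
echo "interactive shell ran: $interactive_out"

rm -rf "$demo_home"
```

<p>Notice the login shell never touches .bashrc on its own - which is exactly why the common advice is to have .bash_profile source .bashrc if you want one set of settings everywhere.</p>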
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, patents reduce the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em>, the configuration files of uninstalled packages are not deleted.</p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
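<p>If you want to see which packages are sitting in the removed-but-not-purged state before running the one-liner, you can filter <em>dpkg -l</em> output for the &#39;rc&#39; status. The dpkg output below is simulated so the sketch runs anywhere; on a real system you would pipe <em>dpkg -l</em> in directly.</p>

```shell
# Packages removed with config files left behind show up in `dpkg -l`
# with status "rc" (removed, config-files remain). The printf output
# here stands in for real `dpkg -l` output.
printf 'ii  bash    5.0  amd64  GNU Bourne Again SHell\nrc  oldpkg  1.0  amd64  a removed package\n' \
  | awk '/^rc/ { print $2 }'
```

On a real system: <em>dpkg -l | awk &#39;/^rc/ { print $2 }&#39;</em>.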
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that: we need compatible 32-bit x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libXss and assorted Qt libraries.</p>
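<p>A quicker way to pull out just the unresolved entries is to grep the ldd output for &#39;not found&#39;. The listing below is simulated so the snippet runs anywhere; against the real binary it would be <em>ldd /usr/bin/skype | grep &#39;not found&#39;</em>.</p>

```shell
# Filter a dependency listing down to the unresolved libraries.
# The printf output stands in for real `ldd /usr/bin/skype` output.
printf 'libXss.so.1 => not found\nlibm.so.6 => /lib32/libm.so.6 (0xf749e000)\nlibQtGui.so.4 => not found\n' \
  | grep 'not found'
```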
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying, things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately upgrading my test-suites to be 3.6 compatible, is the extraction of database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is, I think, <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and configuring a sane desktop environment with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining the 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of local branch mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
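<p>For what it&#39;s worth, git 1.7.0 and later also accept an explicit --delete flag, which does the same thing and is harder to mistype. Here is a self-contained sketch using throwaway repositories in a temp directory:</p>

```shell
set -e
# Demonstrate `git push --delete`, the explicit spelling of the
# `:branch` refspec, against a local bare repo acting as origin.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m 'initial commit'
git branch develop
git push -q origin HEAD develop        # origin now has two branches
git push -q origin --delete develop    # same effect as: git push origin :develop
git ls-remote --heads origin           # develop is gone
```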
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until the end-of-file character (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
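<p>A quick end-to-end run of that workflow, with a throwaway sample file generated on the spot (the filenames are just placeholders):</p>

```shell
# Create a cp1252-encoded sample, inspect it with file(1), convert it
# to UTF-8 with iconv, and inspect the result.
printf 'caf\351\n' > sample.txt            # octal 351 = 0xE9, "é" in cp1252
file sample.txt                            # reports some flavour of ISO-8859 text
iconv -f cp1252 -t utf-8 sample.txt > sample-utf8.txt
file sample-utf8.txt                       # now UTF-8 Unicode text
```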
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
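<p>If you want to try the mechanics without downloading anything, the same shape works with a locally built tarball, with <em>cat</em> standing in for wget. Note the process-substitution step is run via <em>bash -c</em> since &#39;&lt;( )&#39; is a bash feature, not POSIX sh, and -f - makes the stdin archive explicit (GNU tar on Linux usually defaults to it).</p>

```shell
# Build a small tarball, then replay the one-liner with `cat`
# playing the role of `wget -q -O -`.
mkdir -p atarfile
echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile                                   # remove the originals
bash -c 'tar zxvf - < <(cat atarfile.tar.gz)'    # extract via process substitution
cat atarfile/file.txt                            # the extracted copy is back
```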
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the Drush disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
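<p>Command substitution itself is easy to demonstrate with a self-contained snippet (the module names below are stand-ins, not real Drush output):</p>

```shell
# The drush one-liner works because the shell expands `...` (or $(...))
# to the inner command's stdout and word-splits it into arguments.
modules=$(printf 'ad ad_channel click_filter')
set -- $modules                       # split the list into $1, $2, $3
echo "would disable $# modules: $*"   # prints: would disable 3 modules: ad ad_channel click_filter
```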
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but it comes with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a Unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well), you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
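<p>To verify the change took effect, you can list a user&#39;s groups before and after the usermod call. A quick sketch (using the current user so it runs anywhere; substitute &#39;aaron&#39; on a real system):</p>

```shell
# Print the supplementary groups for the current user as a line of names.
# Run this before and after usermod to confirm groups were appended,
# not replaced.
id -nG "$(whoami)"
```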
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the XSL PHP extension and Graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command-line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35-minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application of <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo&#39;s master branch, having forgotten I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework globally on your system (which means you don&#39;t have to bundle it with your local applications, plus you get the zf command-line tool), do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
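<p>For illustration, a php.ini include_path that prefers a project-local library over the global PEAR copy might look like this (the paths here are examples, not a prescription):</p>

```ini
; Entries are searched left to right, so a bundled ./library copy of ZF
; wins over the system-wide PEAR install in /usr/share/php.
include_path = ".:/var/www/app/library:/usr/share/php"
```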
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
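<p>Alternatively, you can have git write that file for you with git config, as the error output below suggests. A sketch (run it as the jenkins user; here HOME is pointed at a scratch directory so the example is safe to try anywhere):</p>

```shell
# Writes the [user] name/email into $HOME/.gitconfig, with the same
# effect as editing the file by hand.
export HOME=$(mktemp -d)
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
git config --global user.name    # prints: Jenkins
```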
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our Ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and PHPUnit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give our project a name. Choose whatever you like; for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run an initial build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the Ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a git identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email notification functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the CLI tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reload the server configuration</li>
<li>restart - restart the server</li>
<li>exit - close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
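<p>If you script your provisioning, the same edit can be made non-interactively with sed. A sketch, shown against a scratch copy of the file so it&#39;s safe to run anywhere (on a real box you would point it at /etc/default/jenkins, with sudo):</p>

```shell
# Rewrite the HTTP_PORT line in place; demonstrated on a temp copy of
# the Jenkins defaults file rather than the real one.
cfg=$(mktemp)
printf '# port for HTTP connector (default 8080; disable with -1)\nHTTP_PORT=8080\n' > "$cfg"
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' "$cfg"
grep '^HTTP_PORT=' "$cfg"    # prints: HTTP_PORT=8081
rm -f "$cfg"
```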
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have burned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature, and the following query worked very well.</p>
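<p>The embedded gist contains the original; for reference, a query along these lines (a reconstruction, not the exact gist) reports per-table sizes from INFORMATION_SCHEMA:</p>

```sql
-- Largest tables first, sizes in megabytes. Replace 'your_database'
-- with the name of the schema you want to inspect.
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM   information_schema.TABLES
WHERE  table_schema = 'your_database'
ORDER  BY (data_length + index_length) DESC
LIMIT  20;
```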
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do, is resume broken file transfers e.g. from a sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
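<p>To see what &#39;resuming&#39; buys you, here&#39;s a toy illustration in plain shell. This only appends the missing tail, whereas rsync is smarter and checksums blocks, but the effect on an interrupted transfer is similar:</p>

```shell
# Simulate an interrupted copy, then transfer only the missing bytes.
printf 'the quick brown fox\n' > /tmp/src.txt
head -c 9 /tmp/src.txt > /tmp/dst.txt            # transfer died partway through
done_bytes=$(wc -c < /tmp/dst.txt)
tail -c +$((done_bytes + 1)) /tmp/src.txt >> /tmp/dst.txt   # resume the rest
cmp -s /tmp/src.txt /tmp/dst.txt && echo resumed  # prints: resumed
```
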
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful but it helped google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be done either with the graphical package manager or with aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something built in to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent invocations of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Set the upstream branch of master using --set-upstream </p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least amount of work.</p>
<p>To avoid having to do this in future, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
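<p>The whole workflow can be exercised locally, using a bare repository on disk in place of GitHub (all paths and identities below are illustrative):</p>

```shell
set -e
rm -rf /tmp/demo-remote.git /tmp/demo-work
git init --bare -q /tmp/demo-remote.git        # stands in for the github repo
git init -q /tmp/demo-work
cd /tmp/demo-work
git symbolic-ref HEAD refs/heads/master        # ensure the branch is 'master'
git config user.email you@example.com
git config user.name 'A. Developer'
echo hello > README
git add README
git commit -qm 'initial commit'
git remote add origin /tmp/demo-remote.git
git push -q origin master
git branch --set-upstream-to=origin/master master 2>/dev/null ||
  git branch --set-upstream master origin/master   # pre-1.8 syntax, as above
git pull -q                                    # no arguments needed any more
```
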
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
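<p>The gist has the full version; the core idea looks roughly like this (the function names are my own, not from the gist, and it assumes mysql/mysqldump are on your PATH):</p>

```shell
# Translate a shell-style glob into a SQL LIKE pattern (* -> %, ? -> _).
glob_to_like() {
  printf '%s\n' "$1" | sed -e 's/\*/%/g' -e 's/?/_/g'
}

# Ask mysql for the matching tables, then hand them to mysqldump.
# Needs a running server, so the usage line is commented out here.
mysqldump_bypattern() {
  db=$1; pattern=$(glob_to_like "$2"); shift 2
  tables=$(mysql "$@" -N -e "SHOW TABLES FROM \`$db\` LIKE '$pattern'")
  mysqldump "$@" "$db" $tables
}
# usage: mysqldump_bypattern mydb 'mytables_*' -uuser -p > mytables.sql
```
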
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect as much of the learning Ruby literature says to basically ignore them for now, or launch into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Programming With Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character for a single-byte character set or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysql issues a begin statement before dumping the contents of a table, ensuring a consistent state of the table without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to them during the backup process. The risk is weighed up against the risk of blocking access to the table during a lengthy backup process.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import as you really only need to commit once the table has been fully imported.</p>&#13;
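<p>For reference, the options discussed in this post combine into something like the following (user and database names are illustrative; the command is held in a variable purely so it can be inspected without a live server):</p>

```shell
# All four mysqldump options discussed above, combined into one command.
# Run the commented line against a live server; names are placeholders.
DUMP_CMD='mysqldump --single-transaction --skip-lock-tables --disable-keys --no-autocommit -uuser -p mydatabase'
# $DUMP_CMD > mydump.sql && gzip mydump.sql
printf '%s\n' "$DUMP_CMD"
```
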
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
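<p>A quick sketch of the difference, using a throwaway repository under /tmp (the paths are made up for the example):</p>

```shell
set -e
rm -rf /tmp/rm-demo
git -c init.defaultBranch=master init /tmp/rm-demo
cd /tmp/rm-demo
echo secret > config.ini
git add config.ini            # stage the new file
git rm --cached config.ini    # unstage it; the file stays on disk
test -f config.ini            # still present in the working tree
git status --porcelain        # now reported as untracked: "?? config.ini"
```
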
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified, tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Shows pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, runs the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>. This is the familiar model of subversion and svn commit.</p>&#13;
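<p>The whole round trip can be sketched with throwaway repositories under /tmp (the paths and the commit identity here are made up for the example):</p>

```shell
set -e
rm -rf /tmp/demo-remote.git /tmp/demo-local
git init --bare /tmp/demo-remote.git              # the central, bare repository
git -c init.defaultBranch=master init /tmp/demo-local
cd /tmp/demo-local
echo "hello" > README
git add README
git -c user.name="Demo" -c user.email="demo@example.com" \
    commit -m 'Initial commit'
git remote add origin /tmp/demo-remote.git        # point at the bare remote
git push origin master                            # publish the initial commit
```
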
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository set up following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to it picked up.</p>&#13;
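<p>The effect is easy to see in a throwaway repository (the file name is made up for the example):</p>

```shell
set -e
rm -rf /tmp/aud-demo
git -c init.defaultBranch=master init /tmp/aud-demo
cd /tmp/aud-demo
echo "db=dev" > local.cfg
git add local.cfg
git -c user.name="Demo" -c user.email="demo@example.com" \
    commit -m 'Add default config'
git update-index --assume-unchanged local.cfg
echo "db=prod" > local.cfg    # a local edit...
git status --porcelain        # ...that status no longer reports
```
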
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating PEAR's missing cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger, and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing come to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
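<p>The C/BASH convention is easy to verify from the shell itself with a throwaway script:</p>

```shell
# write a two-line demo script to /tmp and run it
cat > /tmp/argv-demo.sh <<'EOF'
#!/bin/sh
echo "argv[0]: $0"    # the program's own name (or path)
echo "argv[1]: $1"    # the first real argument
EOF
sh /tmp/argv-demo.sh helloworld
```

<p>This prints the script path for argv[0] and <code>helloworld</code> for argv[1].</p>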
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name; in Ruby, and in Perl, it is the first argument passed into the program. </p>&#13;
<p>I'm trying to think which makes more sense; probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation, as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionProperty::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'. (Note this blunt sed will also rewrite the string 'latin1' anywhere it appears in post content, so eyeball the result if in doubt.)</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
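<p>To see what iconv is actually doing, here is a tiny sketch with a made-up sample file (the octal escape \351 is 'é' in latin1):</p>

```shell
printf 'caf\351\n' > /tmp/latin1-sample.txt    # "café" encoded as latin1
iconv -f latin1 -t utf-8 /tmp/latin1-sample.txt > /tmp/utf8-sample.txt
cat /tmp/utf8-sample.txt                       # the same text, now valid UTF-8
```
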
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a custom language pack, convert its XML export in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but keep your CV conservative and well spaced, with a minimum of fonts. Giant mastheads, fancy bullets and a mess of typefaces aren't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key; save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leaves a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make the best use of these in your CV. Constantly refactor it to weed out redundancy, group like concepts and simplify. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life, not least this blog itself. If you followed every top-ten list of what not to include in your CV, you'd quickly find there's absolutely nothing left to put in it. Critically consider what you read about what a good CV looks like and make up your own mind based on the supporting arguments and the feedback your own CV gets. For example, if you disagree with point two and decide to include an 'interests section', ask recruiters when they call what they thought of it: did it provide value or was it noise? If you're getting interviews, ask the recruiters what in your CV is standing out. If you're not, ask what feedback, if any, there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables whose names begin with a capital letter. Oh, sorry, that should be capitalized. Ruby doesn't speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
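<p>To see just how soft the guarantee is, here's a minimal sketch (the constant name is my own invention):</p>&#13;

```ruby
# In Ruby, any identifier that begins with a capital letter is a constant.
MAX_RETRIES = 3

# Reassigning it merely emits a warning ("already initialized constant
# MAX_RETRIES"); the program carries on and the new value wins.
MAX_RETRIES = 5

puts MAX_RETRIES # prints 5
```

<p>You can <code>freeze</code> the referenced object to block in-place mutation, but reassigning the constant itself still only produces a warning.</p>&#13;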
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sysadmin monkeys that haven't been seduced by Python. And, of course, a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your use of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an appreciation of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str points to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone beyond the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in: they were originally where your business logic was supposed to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. One way to avoid physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But the memory type is very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
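<p>For example, to build a CSV entirely in memory (a small sketch using only standard PHP functions):</p>&#13;

```php
<?php
// Open a memory-backed stream; no file on disk is ever created.
$fh = fopen('php://memory', 'wb+');

// fputcsv() needs a stream handle, which is why php://memory is so handy.
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'widget'));

// The stream position sits at the end after writing, so rewind first.
rewind($fh);
$csv = stream_get_contents($fh);
fclose($fh);

echo $csv;
```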
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, i.e. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive a notification email from Worldpay. This mail includes two attachments: the request Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today, have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
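<p>Magento's autoloading is PHP, of course, but the alias-to-path translation described above can be sketched in Python to make the failure mode concrete. The helper below is illustrative only and ignores the package/module prefix that Magento also prepends to the path:</p>&#13;

```python
def model_file(alias: str) -> str:
    # Roughly mimic how Magento maps the model part of a getModel()
    # alias to a file: underscores become directory separators and
    # each segment gets an upper-cased first letter.
    _, name = alias.split("/")
    return "/".join(seg.capitalize() for seg in name.split("_")) + ".php"

# Underscored alias: each segment becomes its own path component
print(model_file("mymodule/a_long_name_for_a_model"))  # A/Long/Name/For/A/Model.php

# Flat alias: only the first letter is upper-cased, so it can never
# match ALongNameForAModel.php on a case-sensitive file system
print(model_file("mymodule/alongnameforamodel"))       # Alongnameforamodel.php
```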
<p>On Windows this is fine; on a case-sensitive file system (e.g. case-sensitive HFS+ on Mac, or Unix file systems) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well, see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did you know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similarly to their PHP-based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old and will be matched by the query above.</p>&#13;
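<p>The same interval arithmetic can be checked outside MySQL. Python isn't part of the post; this is just a sketch of the comparison the query performs:</p>&#13;

```python
from datetime import date, timedelta

# Mirror DATE_SUB(CURDATE(), INTERVAL 30 DAY) for the example date
cutoff = date(2010, 5, 20) - timedelta(days=30)
print(cutoff)  # 2010-04-20

# A row whose date column is earlier than the cutoff is >30 days old,
# so it matches WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > date_column
print(date(2010, 3, 1) < cutoff)  # True
```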
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin URL by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin URL. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin URL after running the installer, simply fire up an editor, open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout XML templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three-column layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first, though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout XML for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore within the &lt;customer_account&gt; element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS+ filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun: if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However, by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client where they were pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks.</p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as those bytes are not valid utf-8.</p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso-8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8.</p>
<p><code>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks.</p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
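<p>The post&#39;s examples are PHP; the same normalise-then-escape flow can be sketched in Python purely as an illustration of the principle (decode with the real source encoding on input, escape once at the output boundary):</p>

```python
import html

# Bytes as they might arrive from Word on a cp-1252 (Windows) client
raw = "Café™ © 2010".encode("cp1252")

# Normalise: decode using the *actual* source encoding, not a guess
text = raw.decode("cp1252")

# Escape for HTML output; Python str is already Unicode, so no
# encoding parameter is needed at this step
safe = html.escape(text)
print(safe)  # Café™ © 2010
```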
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't (at least not with the BSD sed shipped with Mac OSX), and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to use it like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code>. The empty string is the backup-file suffix; pass e.g. '.bak' instead to keep a backup of the original file.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this is that it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expression returns the number of elements in FILES, and the seq command produces a sequence of numbers from x to y: calling seq 0 4 prints 0, 1, 2, 3 and 4, one per line. Those numbers serve as the array indices for the loop.</p>&#13;
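<p>For most uses though, bash will hand you the values directly, no index arithmetic required. A simpler sketch of the same loop (the quoting is my addition, and worth keeping for paths containing spaces):</p>

```shell
# Iterate the array values directly; quotes keep odd paths intact.
FILES=( a/path/to/a/file1 a/path/to/another/file2 )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```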
<p>So while the syntax is a little smelly, the terse power of it is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Usage (assuming this is saved as a script in the Magento root):&#13;
// php thisscript.php 'value-to-encrypt'&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
// Bootstrap the default store so the crypt key from app/etc/local.xml is available&#13;
$app = Mage::app('default');&#13;
// The plaintext value comes in as the first command line argument&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
// Print the encrypted value, ready to be stored in core_config_data&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go:</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
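<p>The moving part in the alias is git symbolic-ref, which resolves HEAD to the current branch&#39;s short name; you can run it on its own to see exactly what the alias will append to origin/:</p>

```shell
# Print the short name of the branch HEAD currently points at,
# i.e. the value the alias substitutes after origin/
git symbolic-ref --short HEAD
```

<p>On a branch called zendesk this prints zendesk, so git sup expands to git branch --set-upstream-to=origin/zendesk.</p>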
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables cannot be passed along the pipeline, as each stage runs in a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
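<p>A minimal sketch of the gotcha: the while loop below runs in a subshell, so the count it increments is a copy that disappears when the pipeline finishes.</p>

```shell
count=0
printf 'a\nb\n' | while read -r line; do
  count=$((count + 1))   # increments the subshell's copy only
done
echo "$count"            # bash prints 0 here, not 2
```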
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits; they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a “fast-forward”.</p></blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
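<p>Here&#39;s a throwaway-repo sketch (the branch and file names are made up) of what the setting buys you: once the branches have diverged, git merge exits non-zero without touching anything, instead of quietly creating a merge commit.</p>

```shell
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev
git config merge.ff only              # repo-local equivalent of the --global setting above

git commit -qm base --allow-empty     # root commit
base=$(git symbolic-ref --short HEAD) # default branch name varies across git versions

git checkout -qb feature              # diverge: one commit on feature...
echo feature > f.txt && git add f.txt && git commit -qm feature

git checkout -q "$base"               # ...and one on the original branch
echo base > g.txt && git add g.txt && git commit -qm diverge

git merge feature                     # refused: not a fast-forward
```

<p>At that point you&#39;d rebase the feature branch (or reach for --no-ff, as above) before merging.</p>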
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard and I like the idea that those who attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
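<p>As a concrete (made-up) illustration: suppose a products table has three rows in group 1, priced 0.00, 4.99 and 9.99. Then:</p>

```sql
-- group 1 prices: 0.00, 4.99, 9.99
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products
GROUP BY group_id;
-- NULLIF(0.00, 0) yields NULL, MIN() ignores NULLs, so min_price is 4.99
```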
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job, but it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-based &#39;package&#39; manager: when you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then), you&#39;re going to have a bad time.</p>
<p>One last thing to remember: OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem, so PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out that&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working:</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn on the channel auto-discovery option, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache or Nginx? It doesn&#39;t really matter; both are great, but I use Nginx in production so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        fastcgi_read_timeout 120;
        fastcgi_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing to do is set up a self-signed SSL certificate/key pair and store it under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember, you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by writing the socket path for the MySQL version you&#39;re using into a mysql.ini file that PHP will scan. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
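<p>If you mount the same host regularly, the IdentityFile option can also live in ~/.ssh/config, where both ssh and sshfs will pick it up automatically. A sketch using the (example) host details from above:</p>

```
# ~/.ssh/config
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
```

<p>With that in place the mount command shortens to: sshfs awshost:/var/www/ ~/Sites/awshost</p>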
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years, and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (née ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his lab, the other labs, or management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb: &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
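<p>That lookup order can be modelled as a first-match-wins search. This is only an illustrative sketch in Python, not Magento&#39;s actual autoloader code:</p>

```python
# Code pools in descending priority, mirroring Magento's classloader order.
CODE_POOLS = ["local", "community", "core"]

def resolve(class_file, available):
    """Return the first code pool that provides class_file, or None."""
    for pool in CODE_POOLS:
        if (pool, class_file) in available:
            return pool
    return None

# The same class file exists in both core and local...
available = {
    ("core", "Mage/Core/Model/Foo.php"),
    ("local", "Mage/Core/Model/Foo.php"),
}
# ...so the local copy wins, which is why the patched
# Checkout.php in app/code/local shadows the core one.
print(resolve("Mage/Core/Model/Foo.php", available))  # local
```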
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
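<p>The quit/term distinction is ordinary Unix signal handling, so it can be sketched without nginx at all. Here is a toy shell &#39;worker&#39; (purely illustrative, not how nginx is implemented) that drains its in-flight work on QUIT rather than dying mid-pour:</p>

```shell
# A toy worker: on QUIT it notes the request to drain, lets the
# in-flight job finish, then exits cleanly.
draining=0
trap 'draining=1' QUIT

sleep 1 &            # pretend this is an outstanding connection
kill -QUIT $$        # what `nginx -s quit` sends to the master process
wait                 # let the outstanding work complete before exiting
[ "$draining" -eq 1 ] && echo "drained gracefully"
```

<p>TERM, by contrast, would kill the process before the background job completed: exactly the snatched pint that /etc/init.d/nginx stop delivers.</p>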
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and to focus particularly on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing were all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat; but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite formalising most of the vocabulary for OO software development during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I consider that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been easier, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books at midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I am slightly obsessive about keeping the unread count at zero, so at the end of a thirty minute work sprint I would take five minutes to quickly flick through the list. Most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long Rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates input: the -t argument will format standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
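<p>To see the effect without reaching for /etc/passwd, you can feed column any delimited input (a quick sketch; the sample data here is made up):</p>

```shell
# Sample colon-delimited input; -s sets the delimiter, -t tabulates.
printf 'user:shell\nroot:/bin/sh\naaron:/bin/zsh\n' | column -s':' -t
```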
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type the name of a directory to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine; however, if you want to debug during a PHPUnit test, normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely enough fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it does for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad at all. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months, PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available; in the PHP camp, PEAR wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the people behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
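<p>Declaring a dependency is as simple as dropping a composer.json in your project root. A minimal sketch (monolog is a real library, but the version constraint here is just illustrative):</p>

```json
{
    "require": {
        "monolog/monolog": "1.2.*"
    }
}
```

<p>Running composer install fetches the library and its own dependencies into vendor/ and generates an autoloader for you.</p>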
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires a stable release of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while; however, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
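<p>The same trick works with any tool that insists on file arguments. A self-contained sketch using diff (nothing magerun-specific here):</p>

```shell
# <( ... ) hands the consumer a readable path (/dev/fd/N or a named
# pipe) backed by the command's output - no temporary files needed.
# diff stands in for any tool that demands file arguments.
diff_output=$(diff <(printf '1\n2\n') <(printf '1\n3\n') || true)
echo "$diff_output"
```

<p>Note that process substitution is a bash/zsh feature; it won&#39;t work under a plain POSIX sh.</p>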
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;</p>
<p>Alternatively, if you don&#39;t care how the contents differ between two branches, and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
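<p>A quick way to see the output format is a throwaway repository (the file name and commit message below are made up for the demonstration):</p>

```shell
# Build a throwaway repository with a single commit to demonstrate on.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo 'hello' > readme.txt
git add readme.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add readme'

# --pretty=format: suppresses the commit header, leaving just the file list.
git show --name-only --pretty=format: HEAD
```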
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password, and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still largely respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in posts like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This post has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works and what doesn&#39;t, and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively whether what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, so I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? One thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how the PHP community is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more to do with SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh it, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
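<p>As an aside, if memory serves there is also a one-step variant, <strong>git fetch --prune</strong>, which prunes stale remote-tracking branches while fetching. Here is a self-contained demo in a throwaway directory (all paths and branch names are made up for the example):</p>

```shell
# Demonstrate pruning stale remote-tracking branches in a scratch directory.
set -e
rm -rf /tmp/prune-demo
mkdir -p /tmp/prune-demo

# Create an "origin" repository with one extra branch
git init -q /tmp/prune-demo/origin
cd /tmp/prune-demo/origin
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m 'initial commit'
git branch stale

# Clone it; the clone now has a remote-tracking origin/stale branch
git clone -q /tmp/prune-demo/origin /tmp/prune-demo/clone
cd /tmp/prune-demo/clone
git branch -r    # lists origin/stale alongside the default branch

# Delete the branch upstream, then fetch with pruning enabled
git -C /tmp/prune-demo/origin branch -D stale
git fetch --prune origin
git branch -r    # origin/stale is gone
```

The end result is the same as <strong>git remote prune origin</strong>, just folded into your normal fetch.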
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer&#39;s type to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
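<p>For illustration, the override looks along these lines. Note this is a sketch from memory, not the exact gist above: the event and observer node names here are assumptions and may differ between Magento versions, so check your version&#39;s Mage_Log config.xml for the full list of events to disable.</p>

```xml
<!-- app/etc/local.xml (sketch): disable the frontend visitor/log observers -->
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
            <customer_login>
                <observers><log><type>disabled</type></log></observers>
            </customer_login>
            <customer_logout>
                <observers><log><type>disabled</type></log></observers>
            </customer_logout>
        </events>
    </frontend>
</config>
```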
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (and programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately tell a fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped &#39;\;&#39; terminates the command sequence (much like &#39;;&#39; does in regular bash).</p>
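<p>To tie it all together, here&#39;s a self-contained walkthrough in a scratch directory. The file names and dates are made up purely for the demo:</p>

```shell
# Build a sandbox with three files at known timestamps
rm -rf /tmp/finddemo
mkdir -p /tmp/finddemo
cd /tmp/finddemo
touch -t 202001010000 old.log
touch -t 202006150000 mid.log
touch -t 202012310000 new.log

# Boundary marker files defining the date range (Mar 1 to Sep 1)
touch -t 202003010000 start_date_file
touch -t 202009010000 end_date_file

# List only files inside the range, excluding the marker files themselves
find . -type f -newer start_date_file ! -newer end_date_file ! -name '*_date_file'
# → ./mid.log
```

Only mid.log falls between the two boundaries, so it is the only file listed (old.log fails -newer, new.log fails ! -newer).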
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in: it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equally rated wattage, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of the valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shell, no problem.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
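<p>You can sanity-check the awk/paste/sed stages without a database by substituting printf for the mysql step. The colour values here are just sample data; note the backslash-escaped group in the sed expression, which GNU sed&#39;s basic regex syntax requires for the \1 backreference:</p>

```shell
# Simulate the `mysql --silent` output, then run the same formatting pipeline
printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/\(.*\)/[\1];/'
# → ['red','green','blue'];
```

Each line gets quoted by awk, paste joins the lines with commas, and sed wraps the result in brackets and a semicolon.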
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name:[Start TO Finish]
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
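<p>As a concrete example with made-up names and dates, to match documents whose expiry_date is either unset or falls somewhere in 2012:</p>

```
-(-expiry_date:[2012-01-01T00:00:00Z TO 2012-12-31T23:59:59Z] AND expiry_date:[* TO *])
```

The inner clause matches documents that have an expiry_date outside 2012; negating it leaves documents inside 2012 plus documents with no expiry_date at all.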
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. While that means a new validator has to be constructed on each iteration, it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a launcher for Unity, add a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the GNU coreutils...and irssi. Enough to let you build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean that, in the absolute best case, the machine merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At that point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/<distroname>, so here /mnt/ubuntu (it can be whatever you like). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
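<p>The copy itself is a one-liner (using the same /mnt/ubuntu mountpoint as above; adjust if yours differs):</p>
<pre><code>$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>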
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
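<p>When you&#39;re done repairing things, it&#39;s worth leaving the chroot and unmounting everything in reverse order before rebooting - a sketch, using the mountpoints from above:</p>
<pre><code>$ exit                      # leave the chroot
$ umount /mnt/ubuntu/sys
$ umount /mnt/ubuntu/dev
$ umount /mnt/ubuntu/proc
$ umount /mnt/ubuntu/boot   # only if you mounted a separate boot partition
$ umount /mnt/ubuntu
</code></pre>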
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5 - however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array resultset where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just one row per unique status code. For this code to work as you would expect, the entity id (product id) needs to be first in the result set so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end, you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app), .bash_profile gets sourced only for login shells - for example, when you enter your username and password at the console, or when you log in over SSH. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell after boot, such as when you use the su - command or run an explicit login shell as some desktop environments allow. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only read if .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
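<p>If you take that approach, a minimal ~/.bash_profile might look like this (the test guards against a missing .bashrc):</p>
<pre><code># ~/.bash_profile - sourced by login shells only
export PATH=&quot;$HOME/bin:$PATH&quot;   # one-time environment setup

if [ -f ~/.bashrc ]; then
    . ~/.bashrc                 # everything else lives in .bashrc
fi
</code></pre>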
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually with <em>apt-get purge <package></em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
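<p>If you find that hard to read, the same clean-up can be expressed by listing packages in the &#39;rc&#39; state (removed, but config files remaining) and purging them - a sketch, assuming a Debian/Ubuntu system:</p>
<pre><code>dpkg -l | awk &#39;/^rc/ { print $2 }&#39; | xargs -r dpkg -P
</code></pre>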
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm must only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
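<p>For a VM that is already running, the equivalent rules (assuming the same &#39;guestssh&#39; rule name and ports as above) can be applied with controlvm:</p>
<pre><code>VBoxManage controlvm &quot;VM name&quot; natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage controlvm &quot;VM name&quot; natpf1 delete &quot;guestssh&quot;
</code></pre>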
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libXss and several Qt libraries.</p>
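<p>When the dependency list is this long, it helps to filter ldd&#39;s output down to just the unresolved entries (using /bin/ls below as a stand-in binary):</p>

```shell
# Show only the shared libraries the loader cannot resolve.
# grep exits non-zero when nothing matches, hence the fallback message.
ldd /bin/ls | grep 'not found' || echo 'no missing libraries'
```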
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes with the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
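<p>As a concrete sketch of that comparison (test.c here is a hypothetical single-function file, kept free of #includes so the 32bit compile works even without libc6-dev-i386):</p>

```shell
# Emit Intel-syntax assembly for both targets and compare them.
cat > test.c <<'EOF'
int add(int a, int b) { return a + b; }
EOF
if gcc -m32 -S -masm=intel -o test32.s test.c 2>/dev/null; then
    gcc -m64 -S -masm=intel -o test64.s test.c
    diff test32.s test64.s || true   # e.g. 32bit passes args on the stack, 64bit in registers
else
    echo "this gcc cannot target 32bit (multilib support not installed?)"
fi
```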
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), takes the collection and iterates over it, assigning each address to an array keyed by its entityId. It then returns whichever value matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU find. For example, to convert a bunch of images from 1920x1080 down to 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
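<p>As an aside, the sed call in that loop can be replaced with bash parameter expansion, which saves forking an extra process per image:</p>

```shell
# ${IMAGE%.jpg} strips the shortest trailing match of '.jpg' from the value.
IMAGE="holiday-snap.jpg"
echo "${IMAGE%.jpg}-resized.jpg"    # holiday-snap-resized.jpg
```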
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of the local branch mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
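<p>Worth noting: newer versions of git (1.7.0 onwards) also accept an explicit --delete flag, which does the same job as the bare colon but reads far less cryptically. A self-contained demonstration using throwaway repositories (everything happens inside a directory created by mktemp):</p>

```shell
DIR=$(mktemp -d)
git init -q --bare "$DIR/origin.git"
git clone -q "$DIR/origin.git" "$DIR/work" 2>/dev/null
cd "$DIR/work" || exit 1
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD:develop        # create a remote branch called 'develop'
git push -q origin --delete develop    # same effect as: git push origin :develop
git ls-remote --heads origin           # 'develop' is no longer listed
```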
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) on stdin before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows latin 1 (cp1252) to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent: it is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
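<p>A quick round trip shows both tools in action; the single \351 (octal for 0xE9) byte below is how cp1252 spells &#39;é&#39;:</p>

```shell
# Write a one-word cp1252 file, inspect it, then convert it to utf-8.
printf 'caf\351\n' > latin1.txt
file latin1.txt                        # e.g. "latin1.txt: ISO-8859 text"
iconv -f cp1252 -t utf-8 latin1.txt    # café
```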
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
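<p>If moreutils isn&#39;t to hand, the soak-then-write behaviour is easy to approximate with a temporary file. A rough sketch (soak is a made-up name, and unlike the real sponge it doesn&#39;t preserve the target&#39;s permissions):</p>

```shell
soak() {
    tmp=$(mktemp)
    cat > "$tmp"    # absorb all of stdin before touching the target
    mv "$tmp" "$1"  # then replace the target in one step
}
printf 'hello\n' > demo.txt
tr 'a-z' 'A-Z' < demo.txt | soak demo.txt   # safe in-place filter
cat demo.txt                                # HELLO
```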
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
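<p>The same pair of operators works with any producer, not just wget. A trivial illustration (bash-only syntax; plain sh will reject it):</p>

```shell
# Process substitution turns a command's output into something that can
# sit on the right-hand side of '<', just like a file.
wc -l < <(seq 1 10)    # 10
```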
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, this time unchecking the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s pm-disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can feed the module list output to the disable command using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a>, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
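<p>Command substitution feeding one command&#39;s output in as another command&#39;s arguments is a general shell pattern worth internalising. Since not everyone has drush to hand, the toy below stands in shell functions for pm-list --pipe and pm-disable, purely to show how the substituted output is word-split into arguments:</p>

```shell
# pm_list stands in for: drush pm-list --no-core --type=module --pipe
pm_list() { printf 'ad ad_channel click_filter\n'; }
# pm_disable stands in for: drush pm-disable
pm_disable() { echo "disabling: $*"; }
pm_disable $(pm_list)    # disabling: ad ad_channel click_filter
```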
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
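<p>On git 1.7.0 and later, the last two steps can be collapsed into one with push -u (short for --set-upstream). A self-contained demonstration in throwaway repositories created under mktemp:</p>

```shell
DIR=$(mktemp -d)
git init -q --bare "$DIR/origin.git"
git clone -q "$DIR/origin.git" "$DIR/work" 2>/dev/null
cd "$DIR/work" || exit 1
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'Initial feature commit'
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature    # push and set upstream in one step
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'   # origin/my-new-feature
```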
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
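<p>Note the new group only takes effect in fresh login sessions. You can confirm membership with id; with no user argument it reports on the current user, or pass a username (e.g. aaron) to check someone else:</p>

```shell
# -n prints group names rather than numeric ids, -G lists all groups.
id -nG
```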
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies, which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
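<p>As a sketch, the ordering looks like this in php.ini (the paths here are illustrative, not prescriptive); PHP searches the entries left to right, so the application&#39;s bundled library wins:</p>

```ini
; application-local libraries first, system-wide PEAR copy of ZF second
include_path = ".:/var/www/myapp/library:/usr/share/php"
```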
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig:</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
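<p>Equivalently, you can let git write the .gitconfig file for you. Run these as the Jenkins operating system user (e.g. via su or sudo):</p>

```shell
# writes user.name and user.email into ~/.gitconfig
# for whichever user runs the commands
git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"
```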
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed and running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that is, at first glance, daunting. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a first build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><code>reload</code>: reload the server configuration</li>
<li><code>restart</code>: restart the server</li>
<li><code>exit</code>: shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution, which is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools (PDT), the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - EGit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something built in to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig</p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote</p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least work.</p>
<p>To avoid having to do this at all for future repositories, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
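<p>To see the whole flow end to end, you can rehearse it against a local bare repository standing in for the remote. The sketch below uses throwaway temp paths and a throwaway identity; note that on git 1.8 and later the --set-upstream flag from option 4 has been renamed --set-upstream-to:</p>

```shell
# Rehearse the push/upstream setup against a local bare "remote".
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email demo@example.com    # throwaway identity for the demo
git config user.name demo
git commit -q --allow-empty -m 'initial commit'
branch=$(git rev-parse --abbrev-ref HEAD)
git remote add origin "$tmp/remote.git"
git push -q origin "$branch"
# option 4, modern spelling (was: git branch --set-upstream master origin/master)
git branch --set-upstream-to="origin/$branch"
git pull -q    # no longer complains about a missing upstream
```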
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, lightweight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a pattern, putting them in an array, and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
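<p>The gist embedded below is the full version; the core of the approach can be sketched as a small shell function. Everything here (the function name, the hard-coded -uuser, the example table names) is illustrative only; the command assembly is split out purely so it can be inspected without a live server:</p>

```shell
# Compose the mysqldump command for an explicit list of tables.
build_dump_cmd() {
    db="$1"; shift              # first argument: database; rest: tables
    echo "mysqldump -uuser -p $db $*"
}

# In real use the table list would come from something like:
#   tables=$(mysql -N -uuser -p -e "SHOW TABLES LIKE 'mytables\_%'" mydb)
build_dump_cmd mydb mytables_one mytables_two
```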
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this looks like another variable-style construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Programming With Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax and, to me, they detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
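<p>A side note on the snippet above: the quoted "$@" does nothing inside an alias, because Bash simply appends whatever you type after the alias name (which is also why the alias still works). If you prefer the arguments to be handled explicitly, the same shortcut can be written as a shell function (a sketch; qlmanage is the OS X Quick Look helper):</p>

```shell
# Function form of the ql shortcut; here "$@" really does receive the
# filenames passed to ql.
ql() { qlmanage -p "$@" >/dev/null 2>&1; }
```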
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily grow into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is single-byte encoded (e.g. latin1) or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may not be worth the few megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
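<p>The size side of that trade-off is easy to get a feel for with a synthetic dump; timings will vary by machine, but on repetitive SQL both tools compress dramatically, with bzip2 typically edging out gzip on size (this sketch assumes gzip and bzip2 are installed):</p>

```shell
# Fake a repetitive SQL dump, then compare the compressed sizes.
yes 'INSERT INTO t VALUES (1, "hello world");' | head -n 100000 > dump.sql
gzip  -c dump.sql > dump.sql.gz
bzip2 -c dump.sql > dump.sql.bz2
wc -c dump.sql dump.sql.gz dump.sql.bz2    # compare the three sizes
```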
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump; that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysql issues a BEGIN statement before dumping the contents of a table, ensuring a consistent state of the table without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of a table can be lost as writes occur to it during the backup process, but that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
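<p>Putting the four options discussed above together gives a one-line backup command. The sketch below wraps it in a function that only prints the pipeline, so the composition can be checked without a running server; the user and database names are placeholders:</p>

```shell
# Print (rather than run) the full backup pipeline discussed above.
backup_cmd() {
    echo "mysqldump --single-transaction --skip-lock-tables --disable-keys --no-autocommit -u$1 -p $2 | gzip -c > $2.sql.gz"
}
backup_cmd user mydatabase
```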
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty git repository in the current directory, or if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (—untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or number of paths will run the status for that path rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by recreating PEAR's cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore. A safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite tedious, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0]; ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C and BASH the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed into the program. </p>&#13;
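<p>If you do want the C-style program name in Ruby, it is still available: the script name lives in the global <code>$0</code> (also spelled <code>$PROGRAM_NAME</code>). A minimal sketch:</p>

```ruby
#!/usr/bin/env ruby
# ARGV carries only the arguments; the script name lives in $0
puts $0       # the program name, i.e. what C would hand you in argv[0]
puts ARGV[0]  # the first real argument (nil if none given)
```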
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin / feature you need to go to the install new software screen. On a Mac it's found by highlighting the help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document with an ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Seach_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any language packs you want to use also need converting to UTF-8:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
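<p>A minimal sketch of what worries me (the constant name here is my own invention, not from any real code):</p>

```ruby
# A Ruby "constant" is simply any identifier that starts with a capital letter.
MAX_RETRIES = 3

# Reassigning it is not an error: Ruby prints a warning like
# "already initialized constant MAX_RETRIES" to stderr and carries on.
MAX_RETRIES = 5

puts MAX_RETRIES  # => 5
```

<p>Run it and the program happily reports the new value; only a warning on stderr hints that anything untoward happened.</p>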
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately, in Ruby a constant quacks like a duck but bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves someone some time: when trying to get an OAuth token from Salesforce, use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your use of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ('\0').</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, with str pointing to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a time-out or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals .</code> (note the target path argument; here it is the current directory)</p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[development tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading</p>&#13;
<p>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short, do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On a case-insensitive filesystem (Windows, or the default HFS+ on Mac) this is fine; on a case-sensitive filesystem, as is typical on Unix, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about 1/5 of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname utilities are handy and behave similarly to their php based cousins. 
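For instance (the paths here are purely illustrative):

```shell
# basename strips the directory part (and an optional suffix);
# dirname returns the directory part.
basename /var/www/html/index.php        # index.php
basename /var/www/html/index.php .php   # index
dirname /var/www/html/index.php         # /var/www/html
```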
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table that have a date set more than 30 days ago (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is therefore more than 30 days old.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>Type="core/template" refers to a Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout.xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name=root element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected, run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS+ filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun: if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
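<p>A minimal sketch of what that looks like in practice (assuming de_DE.UTF-8 shows up in your 'locale -a' output):</p>

```shell
# In ~/.profile (or /etc/profile, ~/.bash_profile):
# make the shell environment match a UTF-8 capable terminal.
export LC_ALL='de_DE.UTF-8'
export LANG='de_DE.UTF-8'
```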
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client where they were pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Western encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. Extended characters, however, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8, and if you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, because that byte is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output, you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, null, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
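<p>The same normalisation can also be checked from the terminal with the iconv command-line utility (the file names here are just for illustration):</p>

```shell
# Write "café" as ISO-8859-1 (0xE9 = é, given here as octal \351),
# then convert the file to UTF-8.
printf 'caf\351\n' > /tmp/latin1.txt
iconv -f ISO-8859-1 -t UTF-8 /tmp/latin1.txt > /tmp/utf8.txt
cat /tmp/utf8.txt   # café
```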
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
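<p>If you need the same command to work with both BSD (Mac) and GNU sed, a handy middle ground is to attach a backup suffix directly to the flag, which both implementations accept:</p>

```shell
# Both GNU and BSD sed accept a suffix attached to -i,
# leaving the original behind as helloworld.txt.bak.
printf 'hello world\n' > /tmp/helloworld.txt
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt   # goodbye world
```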
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) part generates the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
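As an aside (my sketch, not part of the original construct): bash can also expand the array directly, with no index arithmetic at all.

```shell
#!/usr/bin/env bash
# "${FILES[@]}" expands to every element of the array, and the
# quotes keep paths containing spaces intact.
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```

If you need the index as well as the value, `for ELEMENT in "${!FILES[@]}"` iterates over the indices directly.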
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: base URLs, test payment or shipping account details, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group, which is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
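For reference, the stanza would look like this in full in your ~/.gitconfig (same alias as above; adjust the remote name if you don't use origin):

```
[alias]
    # set the current branch to track origin/<same-branch-name>
    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
```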
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
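A quick illustration (my sketch, not from the original note): a counter incremented inside a pipeline stage is lost when the pipeline ends, while feeding the same loop from process substitution keeps it in the current shell.

```shell
#!/usr/bin/env bash
count=0
# The while loop runs in a subshell created for the pipeline,
# so its copy of count is discarded when the pipeline finishes.
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))
done
echo "after pipeline: $count"               # prints 0

# Workaround: process substitution runs the loop in the current shell.
count=0
while read -r line; do
  count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "after process substitution: $count"   # prints 3
```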
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>&#13;
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
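That command simply writes the following stanza to your ~/.gitconfig, if you prefer to add it by hand:

```
[merge]
    # refuse any merge that cannot be fast-forwarded
    ff = only
```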
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: the Statist style and the Mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero. For example, given group prices of 0.00, 3.00 and 5.00, MIN(NULLIF(price, 0)) returns 3.00 rather than 0.00.</p>
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you may run into with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command tests for an environment variable, so we can set up an alias that passes this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working.</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn on the channel autodiscovery option, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
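<p>The available/enabled dance is easy to script. Here is a minimal sketch: the helper names (ensite/dissite) are invented, and the functions assume the sites-available/sites-enabled layout created earlier, with the nginx etc directory passed in explicitly.</p>

```shell
# Hypothetical helpers wrapping the symlink dance. The first argument
# is the nginx etc dir (e.g. /opt/local/etc/nginx).

# enable a site: ensite <etc-dir> <conf-file> [priority-prefix]
ensite() {
    ln -s "../sites-available/$2" "$1/sites-enabled/${3:+$3-}$2"
}

# disable a site: dissite <etc-dir> <enabled-conf-file>
dissite() {
    rm "$1/sites-enabled/$2"
}
```

<p>With these, the example becomes: ensite /opt/local/etc/nginx magento.dev.conf 001.</p>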
<p>This is the server block definition I use for Magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
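<p>If you script your machine setup, you may want the hosts edit to be idempotent. A sketch (the helper name is made up; the real /etc/hosts needs sudo, so try it on a copy first):</p>

```shell
# Append a magento.dev entry to a hosts file only if one is missing.
# usage: add_host <path-to-hosts-file>
add_host() {
    grep -q "magento.dev" "$1" || printf '127.0.0.1 magento.dev\n' | tee -a "$1"
}
```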
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
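<p>The openssl command will prompt interactively for the certificate subject fields. For a scripted setup, -subj pre-fills them; the country and organisation values below are placeholders, and the CN matches the server_name we configured.</p>

```shell
# Non-interactive self-signed cert; -subj answers the prompts so the
# CN lines up with the nginx server_name (magento.dev). The C and O
# fields are placeholder values.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/C=GB/O=Dev/CN=magento.dev" \
    -keyout magento.dev.key -out magento.dev.crt
```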
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
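<p>A quick way to audit this is to list each directory from the web root up to /, so a missing execute bit stands out. Just a sketch; here it walks $HOME rather than a hard-coded path:</p>

```shell
# Print the permissions of every directory from a path up to the root,
# so you can eyeball which one is missing the world-execute bit.
check_traverse() {
    p=$1
    while [ -n "$p" ] && [ "$p" != "/" ]; do
        ls -ld "$p"
        p=$(dirname "$p")
    done
}

check_traverse "$HOME"
```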
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal.</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing and with those, whether in his or the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
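<p>The copy-up itself can be sketched as a small helper (the paths are illustrative; adjust the Magento root to your install):</p>

```shell
# Mirror a core class file into the local codepool, preserving its
# relative path, so the classloader picks the patched copy first.
# usage: override_core <magento-root> <class path relative to a codepool>
override_core() {
    mkdir -p "$1/app/code/local/$(dirname "$2")"
    cp "$1/app/code/core/$2" "$1/app/code/local/$2"
}

# e.g. override_core /Users/aaron/Sites/magento \
#          Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
```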
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
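<p>Under the hood, -s quit simply delivers SIGQUIT to the master process (and -s stop sends SIGTERM), so if you only have the pid file handy, plain kill does the same job. A sketch; the pid file location varies by build, so check your nginx.conf:</p>

```shell
# Equivalent of `nginx -s quit`: send SIGQUIT to the pid recorded in
# the master's pid file.
# usage: graceful_stop <path-to-pid-file>
graceful_stop() {
    kill -QUIT "$(cat "$1")"
}

# e.g. (sudo needed for the real thing):
#   graceful_stop /var/run/nginx.pid
```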
<p>Let your visitors finish their drinks; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite the team formalising most of the vocabulary of OO software development while building Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s The Mythical Man-Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>Since making a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtime. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few thirty-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
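<p>A tiny made-up example of what this does (assuming the util-linux or BSD version of column):</p>
<pre><code># Split on &#39;:&#39; and pad each field into an aligned column
$ printf &#39;root:x:0\ndaemon:x:1\n&#39; | column -s&#39;:&#39; -t
</code></pre>
<p>Each colon-separated field comes out padded into a neat, aligned column, which makes files like /etc/passwd far easier to scan.</p>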
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and then entering an integer will switch you directly to that entry in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
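<p>For the curious, these shortcuts aren&#39;t all on by default; frameworks like oh-my-zsh wire them up for you. A rough ~/.zshrc sketch that reproduces the behaviour above (my approximation - check the zsh manual for the exact option semantics):</p>
<pre><code># &#39;..&#39; and bare directory names change directory; every cd pushes onto the stack
setopt auto_cd auto_pushd pushd_ignore_dups

# &#39;d&#39; lists the stack with indexes
alias d=&#39;dirs -v&#39;

# &#39;1&#39; .. &#39;9&#39; jump straight to that stack entry
for index in {1..9}; do
    alias "$index"="cd +${index}"
done
unset index
</code></pre>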
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
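<p>A common variation on this trick redirects tee&#39;s output, since tee otherwise echoes the whole file back for vim to display:</p>
<pre><code>:w !sudo tee % &gt; /dev/null
</code></pre>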
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story, very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
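<p>If the gem is managed through Bundler, you don&#39;t have to retype those flags every time: Bundler can store per-gem build options (same MacPorts paths assumed as above).</p>
<pre><code>$ bundle config build.mysql2 --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql
$ bundle install
</code></pre>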
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, if we want to debug during a phpunit test, normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to port 9000 on its localhost back to port 9000 on the machine you connected from. So when xdebug on myvm.local connects to localhost:9000, it actually ends up talking to mydevmachine.local:9000.</p>
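<p>Putting the pieces together, a debugging session looks something like this (hostnames as in the examples above; with the tunnel in place there&#39;s no need to override xdebug.remote_host at all):</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
myvm$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -c phpunit.xml
</code></pre>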
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
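<p>If you haven&#39;t tried it, the workflow is refreshingly simple. A hypothetical minimal composer.json (monolog here is just a stand-in for whatever library you depend on) plus one command is all it takes:</p>
<pre><code>{
    "require": {
        "monolog/monolog": "1.0.*"
    }
}

$ composer install
</code></pre>
<p>Composer resolves the dependency graph, fetches everything into vendor/ and generates a class autoloader for you.</p>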
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package X requires the stable version of package Y, while package Z requires the beta version of Y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) or dubious (at worst) quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves. </p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file argument to work with, so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
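<p>As it happens, builds of xmllint I&#39;ve tried will also read standard input if you pass &#39;-&#39; as the filename, so a plain pipe can work too:</p>
<pre><code># &#39;-&#39; tells xmllint to read the document from stdin
$ magerun config:dump | xmllint --format -
</code></pre>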
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
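<p>A related variant I find useful is --name-status, which prefixes each file with what happened to it (A added, M modified, D deleted). The file names below are, of course, just illustrative:</p>
<pre><code>$ git diff master..origin/master --name-status
M       app/Model/User.php
A       app/etc/local.xml
</code></pre>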
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important, as by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
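<p>One caveat: while the server is running with the grant tables disabled, the new password isn&#39;t actually being checked. If you want to verify it before restarting, FLUSH PRIVILEGES reloads the grant tables on the spot:</p>
<pre><code>mysql&gt; FLUSH PRIVILEGES;
</code></pre>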
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time where people still actually wrote web applications in C and where the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
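<p>In other words, because matching runs from the bottom up, you declare your most generic handler first and your most specific handlers last. A sketch of that shape (the handler method names here are made up for illustration):</p>
<pre><code>class ApplicationController &lt; ActionController::Base
  # Generic catch-all: declared first, so it only matches
  # when nothing below it does
  rescue_from Exception, :with =&gt; :render_error

  # Specific handlers: declared last, searched first
  rescue_from ActiveRecord::RecordNotFound, :with =&gt; :render_not_found
end
</code></pre>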
<p>What can we learn from this? Well, that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh this list, I need to prune my branches. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
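<p>As an aside, newer Git can fetch and prune in a single step with git fetch --prune (or -p). Below is a minimal sketch that reproduces the stale-branch situation with throwaway repositories; every path and branch name here is invented:</p>

```shell
# Simulate a branch deleted "from another host": a throwaway bare remote
# plus a working clone. All paths and branch names are made up.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/work"
cd "$tmp/work"
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m initial
git push -q origin HEAD
git push -q origin HEAD:stale-branch                 # branch now exists on the remote
git -C "$tmp/remote.git" branch -q -D stale-branch   # deleted "from another host"
git branch -r                                        # origin/stale-branch may still be listed locally
git fetch -q --prune                                 # same effect as 'git remote prune origin'
after=$(git branch -r)                               # the stale ref is gone
echo "$after"
cd /; rm -rf "$tmp"
```

<p>Both forms do the same pruning; fetch --prune just saves a round trip when you were going to fetch anyway.</p>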
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
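<p>Since the embedded gist may not render in feed readers, the override has roughly the following shape. The event and observer node names here are from memory of Mage_Log&#39;s configuration, so verify them against app/code/core/Mage/Log/etc/config.xml before relying on them:</p>

```xml
<!-- app/etc/local.xml (fragment) -->
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
            <customer_login>
                <observers><log><type>disabled</type></log></observers>
            </customer_login>
            <customer_logout>
                <observers><log><type>disabled</type></log></observers>
            </customer_logout>
        </events>
    </frontend>
</config>
```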
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>This fix is easy; replace line 11 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (and programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much like ; does in regular bash; the backslash stops your shell from interpreting it first).</p>
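<p>Putting the whole recipe together, here is a self-contained sketch you can run in a scratch directory; all the file names and dates below are invented:</p>

```shell
# Create scratch files with known timestamps, then delete only those
# falling between the two boundary files.
set -e
tmp=$(mktemp -d)
touch -t 202601010900 "$tmp/old-report"   # before the range
touch -t 202601040900 "$tmp/mid-report"   # inside the range
touch -t 202601080900 "$tmp/new-report"   # after the range
touch -t 202601020000 "$tmp/start_date_file"
touch -t 202601060000 "$tmp/end_date_file"
# Delete files newer than start and not newer than end, excluding the
# boundary files themselves (end_date_file matches its own endpoint).
find "$tmp" -type f -newer "$tmp/start_date_file" \
     ! -newer "$tmp/end_date_file" ! -name '*_date_file' \
     -exec rm -f {} \;
remaining=$(ls "$tmp")   # mid-report is gone; the others survive
echo "$remaining"
rm -rf "$tmp"
```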
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to setup a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher rated power supply can support a lower rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
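<p>The whole procedure can be sketched like so — non-destructively, against a scratch copy standing in for /etc/shells, since editing the real file needs sudo:</p>

```shell
# chsh only accepts shells listed in /etc/shells. Sketch the edit against
# a scratch copy; on a real system you would sudo-edit /etc/shells itself,
# then run: chsh -s /opt/local/bin/bash
newshell=/opt/local/bin/bash                 # the Macports bash path from the post
demo=$(mktemp)
printf '/bin/sh\n/bin/bash\n' > "$demo"      # stand-in for /etc/shells
grep -qx "$newshell" "$demo" || echo "$newshell" >> "$demo"   # the sudo-edit step
listed=$(grep -x "$newshell" "$demo")        # chsh would now accept this shell
echo "$listed"
rm -f "$demo"
```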
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; - \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query in to mysql and ask it to give you raw unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
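<p>Minus the mysql stage, you can try the formatting pipeline on any lines of text. This sketch uses made-up sample values, and sed&#39;s whole-match back-reference &amp; in place of an explicit group:</p>

```shell
# Same shaping stages as the post, fed from printf instead of mysql.
# 'red', 'green', 'blue' stand in for real column values.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# → ['red','green','blue'];
```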
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It means the validator has to be constructed anew on each iteration, but that allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a c compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. And by break, I mean that in the absolute best case the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, with which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, the chroot needs to reference exactly the same mountpoints as the host, so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will behave pretty much as it would if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host, so if you need access to some specific hardware you need to set that up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, followed by re-running the grub installer.</p>
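<p>For reference, the whole procedure can be collected into one script. This is an illustrative sketch only: the device names (/dev/sda5, /dev/sda1) and the /mnt/ubuntu mount point are taken from the examples above and will need adjusting for your own system.</p>

```shell
# Recap of the steps above as a single function. Device names and the
# mount point are assumptions -- adjust them for your own disk layout.
chroot_repair() {
    target=/mnt/ubuntu
    mkdir -p "$target"
    mount -t ext4 /dev/sda5 "$target"              # root partition
    mount -t ext2 /dev/sda1 "$target/boot"         # only if /boot is separate
    mount -t proc none "$target/proc"              # proc for the chroot
    mount -o bind /dev "$target/dev"               # bind host /dev
    mount -o bind /sys "$target/sys"               # bind host /sys
    cp /etc/resolv.conf "$target/etc/resolv.conf"  # working DNS inside chroot
    chroot "$target" /bin/bash
}
```

Run it as root from the livecd session once the network is up.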
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items you can have a short period where your store&#39;s products are unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so the group was considered out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when I pasted it into MySQL I got a bunch of valid-looking results. Curiously though, the status column came first and the product id column second (the reverse of the if branch). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where the first column is the key and the second is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to be first in the result set, so that it gets used as the key.</p>
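<p>The collapse is easy to demonstrate outside of Magento. Here&#39;s a small sketch, in Python rather than PHP purely for illustration (the ids are made up), of how keying a result set on the first column loses rows:</p>

```python
# Rows as they come back with the status column first (made-up ids).
rows = [
    ("1", "101"),  # (status, product_id) -- wrong column order
    ("1", "102"),
    ("2", "103"),
    ("1", "104"),
]

# fetchPairs-style keying uses the first column as the key, so duplicate
# statuses overwrite each other: only one entry per unique status survives.
wrong = dict(rows)
assert wrong == {"1": "104", "2": "103"}  # collapsed to 2 entries

# With the product id first, every product keeps its own status.
right = {pid: status for status, pid in rows}
assert right == {"101": "1", "102": "1", "103": "2", "104": "1"}
```

The one-column difference in the select is all it takes to flip between these two shapes.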
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. Even in 2012, this is an unusual way of seeing them for a lot of developers. Typically I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough-and-ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and support what is being discussed. I felt the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practise what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell, such as with the su - command or an explicit login shell sometimes provided by a desktop environment. In these cases the rule still applies: a login shell sources .bash_profile, and .bashrc runs only if your .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
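<p>A minimal .bash_profile following that approach might look like this (a sketch; the existence check guards against a missing .bashrc causing an error on login):</p>

```shell
# ~/.bash_profile -- defer everything to ~/.bashrc so that login and
# non-login interactive shells get the same setup.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```

With this in place, environment setup lives in one file and applies everywhere.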
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall$/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
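<p>To see what the sed step is doing, here it is applied to some canned --get-selections style output (the package names are made up): any package left in the deinstall state gets rewritten to purge before being fed back to dpkg --set-selections.</p>

```shell
# Canned dpkg --get-selections style output (made-up package names).
# grep keeps only the deinstall lines; sed rewrites the state to purge,
# which tells dpkg to also remove the leftover configuration files.
printf 'vim\tinstall\nold-package\tdeinstall\n' \
    | grep deinstall \
    | sed 's/deinstall$/purge/'
# -> old-package    purge   (tab-separated)
```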
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm can only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
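<p>For a VM that&#39;s already running, the equivalent commands use controlvm instead. Sketched here as small shell functions; the VM name you pass in is a placeholder.</p>

```shell
# controlvm equivalents of the natpf1 rules above, for a running VM.
# The VM name argument is a placeholder -- substitute your own.
vbox_forward_ssh() {
    VBoxManage controlvm "$1" natpf1 "guestssh,tcp,,2222,,22"
}
vbox_unforward_ssh() {
    VBoxManage controlvm "$1" natpf1 delete "guestssh"
}
```

e.g. vbox_forward_ssh "Name of VM", then ssh -p 2222 to localhost.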
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I&#39;m missing not just a compatible libXss but also various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
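<p>(The test.c used throughout isn&#39;t shown in the post; any minimal C program will do. Here is a hypothetical stand-in:)</p>
<pre><code>$ cat &gt; test.c &lt;&lt;&#39;EOF&#39;
#include &lt;stdio.h&gt;

int main(void)
{
    printf(&quot;Hello, 32-bit world!\n&quot;);
    return 0;
}
EOF
</code></pre>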
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can be interesting (perhaps it&#39;s just me :)) to see the subtle differences in how the same program is assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entityId, and then returns the element matching $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
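<p>Because varnishd exits non-zero when compilation fails (note the &#39;exit 1&#39; in the output above), the same check drops neatly into a deploy script (a sketch; adjust the VCL path for your setup):</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl &gt; /dev/null 2&gt;&amp;1 &amp;&amp; echo &#39;VCL OK&#39; || echo &#39;VCL broken&#39;
</code></pre>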
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a batch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
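<p>The same conversion with GNU find, which copes better with large directory trees (a sketch; the {}-resized.jpg naming is just an example, and embedded {} substitution is a GNU find feature):</p>
<pre><code>$ find . -name &#39;*.jpg&#39; -exec convert -resize &#39;1280x720&#39; {} {}-resized.jpg \;
</code></pre>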
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 while maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? There, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to the requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
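<p>As an aside, git 1.7.0 and later also accept a more self-explanatory spelling that does the same thing:</p>
<pre><code>$ git push origin --delete someremotebranch
</code></pre>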
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how Sponge waits for end-of-file (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin-1 to UTF-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it were cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
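<p>For example, with GNU file&#39;s -i option you also get the detected charset (the filename and output here are illustrative; results vary with the file&#39;s contents):</p>
<pre><code>$ file -i myfile.txt
&gt; myfile.txt: text/plain; charset=iso-8859-1
</code></pre>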
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
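<p>For this particular job a plain pipe does the same thing, arguably more readably; process substitution really earns its keep when a command wants filenames rather than stdin (the URL is a placeholder as above, and the diff example files are hypothetical):</p>
<pre><code>$ wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxv

$ diff &lt;(sort file1.txt) &lt;(sort file2.txt)
</code></pre>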
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
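<p>With git 1.7.0 or later, the last two steps collapse into one: -u (--set-upstream) pushes the branch and configures tracking at the same time.</p>
<pre><code>$ git push -u origin my-new-feature
</code></pre>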
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
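<p>You can verify the result with the id command (the output shown is illustrative; note a user may need to log out and back in before new group memberships take effect in running sessions):</p>
<pre><code>$ id -nG aaron
&gt; aaron wheel
</code></pre>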
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command-line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
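<p>If you have no further use for the stash afterwards, git stash pop applies the changes and drops the stash entry in one step (a sketch of the same workflow; plain &#39;git stash&#39; is shorthand for &#39;git stash save&#39;):</p>
<pre><code>$ git stash
$ git checkout develop
$ git stash pop
$ git commit -a -m &#39;Apply stashed changes&#39;
</code></pre>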
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
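<p>For example, a php.ini fragment along these lines puts a bundled library ahead of the PEAR copy (the paths here are illustrative; on Ubuntu the PEAR directory is typically /usr/share/php):</p>

```ini
; php.ini fragment - the application's ./library wins over the PEAR-installed ZF
include_path = ".:./library:/usr/share/php"
```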
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
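<p>The same identity can also be written with git config itself. The sketch below targets a scratch file via -f so you can try it without touching a real ~/.gitconfig; to apply it for Jenkins you would instead run the two git config lines as the jenkins user with --global:</p>

```shell
set -e
cfg=$(mktemp)
git config -f "$cfg" user.name 'Jenkins'
git config -f "$cfg" user.email 'jenkins@localhost'
git config -f "$cfg" --list   # shows the identity just written
```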
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;you@example.com&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment setup correctly, please checkout my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed and running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git. </p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
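<p>For orientation, the generated build.xml follows roughly this shape. This is a trimmed, illustrative sketch of an ant file with clean, tool, and aggregate targets, not the literal ppw output:</p>

```xml
<project name="Bookings" default="build" basedir=".">
  <target name="clean" description="Clean up and recreate artifact directories">
    <delete dir="${basedir}/build/logs"/>
    <mkdir dir="${basedir}/build/logs"/>
  </target>
  <target name="phpunit" description="Run unit tests">
    <exec executable="phpunit" failonerror="true"/>
  </target>
  <target name="build" depends="clean,phpunit"/>
</project>
```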
<p>First, we need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an, at first glance, daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images. To have these display on the project dashboard, replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build once before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to checkout is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reload the server configuration</li>
<li>restart - restart the server</li>
<li>exit - shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
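<p>If you prefer to script the port change, a sed one-liner does it. The sketch below operates on a scratch copy of the defaults file rather than the real /etc/default/jenkins, and assumes GNU sed for -i:</p>

```shell
set -e
tmp=$(mktemp -d)
# scratch stand-in for /etc/default/jenkins
printf '%s\n' '# port for HTTP connector (default 8080; disable with -1)' \
              'HTTP_PORT=8080' > "$tmp/jenkins"
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' "$tmp/jenkins"   # GNU sed syntax
grep '^HTTP_PORT=' "$tmp/jenkins"
```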
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be done either through the graphical package manager or with aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
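<p>The flag can be toggled per store in the admin payment method settings, or defaulted from a module&#39;s config.xml; a sketch of the relevant fragment, using the usaepay path from the example above:</p>

```xml
<config>
  <default>
    <payment>
      <usaepay>
        <debug>1</debug>
      </usaepay>
    </payment>
  </default>
</config>
```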
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You now have set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent invocations of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig</p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote</p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<p>$ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it requires the least amount of work.</p>
<p>To avoid having to do any of this for future branches, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
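For reference, option 1 (editing your git config) boils down to the two keys git's error message describes, and you can set them from the command line rather than hand-editing the file. A sketch in a throwaway repository, assuming your remote is named origin and your branch is master:

```shell
# Demo in a throwaway repository; in practice run the two git config
# commands inside your own checkout.
cd "$(mktemp -d)"
git init -q .

# Equivalent of the [branch "master"] stanza from git's error message:
git config branch.master.remote origin
git config branch.master.merge refs/heads/master

# Verify the result:
git config --get branch.master.merge   # refs/heads/master
```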
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
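If the embedded gist below doesn't render in your feed reader, the idea looks roughly like this. The function name and the credential handling here are illustrative (this sketch assumes credentials come from ~/.my.cnf rather than repeated -u/-p options), not the exact gist:

```shell
# Dump every table in database $1 whose name matches the LIKE pattern $2,
# appending each table's dump to the output file $3.
dump_tables_like() {
  db=$1 pattern=$2 out=$3
  : > "$out"   # truncate/create the output file
  mysql -N -e "SHOW TABLES LIKE '$pattern'" "$db" |
  while read -r table; do
    mysqldump "$db" "$table" >> "$out"
  done
}

# Usage: dump_tables_like mydb 'mytables_%' mytables.sql
```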
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
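As an aside, if a hash arrives with string keys (from parsed JSON, say), another option is to normalise the keys once rather than calling to_sym at every lookup. A plain-Ruby sketch (Rails users get symbolize_keys from ActiveSupport for this):

```ruby
# Build a copy of the hash with every key converted to a Symbol, so all
# subsequent lookups can use one consistent key style.
mixed = { :mykey => 'hello world', 'another_key' => 'goodbye world' }

normalised = {}
mixed.each { |k, v| normalised[k.to_s.to_sym] = v }

puts normalised[:mykey]        # hello world
puts normalised[:another_key]  # goodbye world
```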
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect as much of the learning Ruby literature says to basically ignore them for now, or launch into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another to get its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages), two Strings are different objects even if they consist of the same sequence of characters. In Ruby, two Symbols consisting of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
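The "only ever one copy" claim is easy to verify for yourself. A quick sketch you can paste into irb:

```ruby
# Two equal string literals are distinct objects; two identical symbols
# are one shared object.
a = "key"
b = "key"

puts a == b                     # true  (same characters)
puts a.equal?(b)                # false, two distinct objects
                                # (unless string literals are frozen)
puts :key.equal?(:key)          # true, one shared object
```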
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
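A caveat: inside a single-quoted alias, "$@" expands to the shell's (usually empty) positional parameters, and your filename simply lands at the end of the expanded command line, so the alias works somewhat by accident. A shell function expresses the intent directly and handles multiple files:

```shell
# Quick Look one or more files from the terminal, silencing qlmanage's
# chatty output.  Works with any number of arguments: ql *.png
ql() { qlmanage -p "$@" > /dev/null 2>&1; }
```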
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character for single-byte encodings or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, it should be avoided: the dump can only proceed as fast as the compressor consumes it, and all the while (by default, with MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but it should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may well be a worse deal than the few extra megabytes you accept by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump; that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the consistency of the dump can be lost as writes occur during the backup process; that risk has to be weighed against the risk of blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
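Putting the import-friendly flags together with the locking options from earlier gives a dump command worth wrapping in a small helper. This is a sketch; the function name is mine, and it assumes credentials live in ~/.my.cnf so no interactive -p prompt is needed:

```shell
# Dump database $1 with options tuned for a fast re-import, gzipping the
# result to <dbname>.sql.gz in the current directory.
dump_for_fast_import() {
  db=$1
  mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit "$db" | gzip -c > "$db.sql.gz"
}

# Usage: dump_for_fast_import mydatabase
```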
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new, empty git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified tracked files; alternatively, you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> instructions up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want Git or SVN SCM access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C and BASH the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
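<p>If you do want the PHP/C-style behaviour, Ruby exposes the script name separately in the global <code>$0</code> (aliased as <code>$PROGRAM_NAME</code>). A quick sketch:</p>

```ruby
# Ruby keeps the script name out of ARGV; it lives in the
# global $0 ($PROGRAM_NAME) instead.
puts $0              # the script name, e.g. "args.rb"
puts ARGV.length     # number of arguments actually passed
puts ARGV[0].inspect # first real argument, or nil if none given
```

Running `ruby args.rb helloworld` would print the script name, then the argument count, then the first argument.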
<p>I'm trying to think which makes more sense; probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets confused because, when you install the MySQL RubyGem as directed by Rake, you link against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
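<p>Putting the pieces together, the delete-then-add 'update' looks something like the following sketch. This is only an illustration, not a drop-in implementation: the index path, field names and the <code>$newTitle</code>/<code>$newContents</code> variables are hypothetical, and it assumes the Zend Framework is on your include path.</p>

```php
<?php
// Sketch: "updating" a document in Zend_Search_Lucene.
// There is no in-situ update, so locate the old document via its
// untokenized Keyword field, delete it, then add the replacement.
$index = Zend_Search_Lucene::open('/path/to/index');

// Find the existing document(s) by the Keyword identifier
$term = new Zend_Search_Lucene_Index_Term('http://a.com/uri', 'uri');
foreach ($index->termDocs($term) as $id) {
    $index->delete($id);
}

// Re-add the replacement document with the same Keyword identifier
// ($newTitle and $newContents are placeholders for your own data)
$doc = new Zend_Search_Lucene_Document();
$doc->addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));
$doc->addField(Zend_Search_Lucene_Field::Text('title', $newTitle));
$doc->addField(Zend_Search_Lucene_Field::UnStored('contents', $newContents));
$index->addDocument($doc);
$index->commit();
```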
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
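<p>One detail worth knowing: the filter values are bit flags, so they can be combined with a bitwise OR to match methods carrying any of the given modifiers. A small sketch (the <code>Example</code> class here is hypothetical):</p>

```php
<?php
// Sketch: the $filter argument is a bitmask, so combine flags with |.
// Here we list methods that are public OR protected on a class.
class Example
{
    public function visible() {}
    protected function guarded() {}
    private function hidden() {}
}

$r = new ReflectionClass('Example');
$filter = ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_PROTECTED;
foreach ($r->getMethods($filter) as $m) {
    echo $m->getName(), "\n"; // visible and guarded, but not hidden
}
```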
<p>As on many platforms, it seems that in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book or, in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf</a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
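<p>Before reimporting, it's worth sanity-checking the conversion with file(1). A small self-contained sketch (using demo file names, not your actual dump; the exact wording of file's output varies by platform):</p>

```shell
# Sketch: verify an iconv Latin-1 -> UTF-8 conversion with file(1).
# For the real thing, substitute forum_db_backup*.sql for the demo files.
printf 'caf\351\n' > sample-latin1.txt            # 0xE9 is é in Latin-1
iconv -f latin1 -t utf-8 sample-latin1.txt > sample-utf8.txt
file sample-latin1.txt sample-utf8.txt
# the converted file should be reported as UTF-8 Unicode text
```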
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>The VBulletin language pack needs the same treatment; convert it to UTF-8 before importing it.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; Phrases -&gt; Download/Upload Languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh, sorry, that should be capitalized: Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
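<p>A minimal demonstration of the point: reassigning a Ruby constant merely emits a warning on stderr, and execution carries on with the new value.</p>

```ruby
# Reassigning a "constant" in Ruby only produces a warning;
# the program continues with the new value.
ANSWER = 42
ANSWER = 43 # warning: already initialized constant ANSWER
puts ANSWER # prints 43 - the "constant" has quietly changed
```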
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves someone some time: when trying to get an oauth token from salesforce, use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool!
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it as a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely check out the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the object's current SQL state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design-challenged developer like me, http://www.colourlovers.com/ will help you pick
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t yet been seduced by Python, and a rarer breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
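To make the parallel concrete, here's a quick sketch (the values are made up, and php://temp is just a scratch stream so no real file is needed) — every call below has a near-identical counterpart in C's stdio:

```php
<?php
// Each call maps almost 1:1 onto C's stdio/string functions:
// fopen -> fopen, fwrite -> fwrite, sprintf -> sprintf,
// rewind -> rewind, fgets -> fgets, fclose -> fclose.
$fh = fopen('php://temp', 'w+');               // scratch stream, no real file
fwrite($fh, sprintf("%s,%d\n", 'answer', 42)); // formatted write
rewind($fh);                                   // seek back to the start
$line = fgets($fh);                            // read one line back
fclose($fh);
echo $line;                                    // answer,42
```

Drop the `$` sigils and swap `echo` for `printf` and this is very nearly a legal C fragment.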
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable sigils, and be left with a recognisable C fragment.</p>&#13;
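Here's a minimal sketch of that. To keep it self-contained it talks to a loopback server in the same script (binding port 0 asks the OS for any free port):

```php
<?php
// A self-contained TCP round-trip on localhost; the stream_socket_* calls
// mirror C's socket API (socket/bind/listen, connect, send, accept, recv).
$server = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr);
$addr   = stream_socket_get_name($server, false);      // actual host:port bound
$client = stream_socket_client("tcp://$addr", $errno, $errstr, 5.0);
fwrite($client, "hello\n");                            // like send()
$conn = stream_socket_accept($server, 5.0);            // like accept()
echo fgets($conn);                                     // like recv(); prints "hello"
fclose($conn);
fclose($client);
fclose($server);
```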
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an “appreciation” of memory management. C is unusual among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the language relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars terminated by a NUL character ('\0').</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters; str itself points to an address in memory where 20 bytes have been reserved. Now, if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do exactly what you tell it, which in this case means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
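A trivial illustration: the 20-character "buffer" below grows happily, PHP quietly doing the reallocation that C would make you do by hand:

```php
<?php
// PHP transparently reallocates as the string grows -- no overflow,
// but each '.' / '.=' can mean a realloc-and-copy behind the scenes.
$s = str_repeat('a', 20);   // a 20-byte string, like char str[20] in C
$s .= ' and then some';     // writing past 20 bytes just works in PHP
echo strlen($s), "\n";      // 34
```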
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in: they were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. One benefit of getting your hands dirty in this way is a nice speed boost for the functions in your extension, but the more important one is how greatly it can improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) comes first (big endian) or the little end does (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
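For example, fputcsv — which insists on a stream handle — works just as happily against php://memory (the field names and values here are made up):

```php
<?php
// Build CSV entirely in memory, then read it back as a string.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, ['id', 'name']);
fputcsv($fh, [1, 'widget']);
rewind($fh);                          // seek back to the start before reading
$csv = stream_get_contents($fh);
fclose($fh);
echo $csv;                            // id,name\n1,widget\n
```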
<p>You can fread, fwrite and stream_get_contents on the memory stream, or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should accept requests only from certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of keeping third-party code out of your own repository.</p>&#13;
<p>For example, many PHP applications today, have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows, with its case-insensitive file system, this is fine; on a case-sensitive file system (e.g. most Unix file systems, or case-sensitive HFS on a Mac) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to set the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find the attribute code, look it up either in the database (the eav_attribute table) or in the admin backend under Catalog-&gt;Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second, while a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field has a date earlier than this value, the record is more than 30 days old.</p>&#13;
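If you want to sanity-check the interval arithmetic outside MySQL, GNU date can reproduce it. A small sketch, assuming GNU coreutils (the -d option is not available in BSD/macOS date):

```shell
# Compute the same cutoff the query derives with
# DATE_SUB(CURDATE(), INTERVAL 30 DAY), pinned to the article's
# example date so the result is reproducible.
cutoff=$(date -d '2010-05-20 -30 days' +%F)
echo "$cutoff"   # prints 2010-04-20
```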
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/<span class="Apple-style-span"><span class="Apple-style-span">Mage/Adminhtml/etc/config.xml.</span></span></p>&#13;
<p><span class="Apple-style-span"><span class="Apple-style-span">This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</span></span></p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider  three aspects</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout XML templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or in other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the template's output will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_COMPAT, &#39;UTF-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
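The same normalisation is available outside PHP too: the command-line iconv tool does what the PHP function does. A small sketch, assuming the iconv and od utilities are installed (they ship with most Unix systems):

```shell
# An e-acute in ISO-8859-1 is the single byte 0xE9 (octal 351);
# converting to UTF-8 yields the two-byte sequence 0xC3 0xA9.
printf '\351' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1
# shows the bytes c3 a9
```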
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>On Mac OS X (which ships BSD sed, where -i requires a backup-suffix argument) it doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to pass an empty backup suffix: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code>. The empty string tells sed to edit in place without keeping a backup copy.</p>&#13;
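For scripts that must run on both GNU and BSD sed, a common middle ground is to attach a real suffix to -i, which both implementations accept, and delete the backup afterwards if you don't want it. A sketch:

```shell
# Works with GNU sed and BSD/macOS sed alike: the suffix is attached to -i.
printf 'hello world\n' > /tmp/helloworld.txt
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt       # goodbye world
rm /tmp/helloworld.txt.bak    # drop the backup once you trust the edit
```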
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expression gives the number of elements in FILES, so $(seq 0 $((${#FILES[@]} - 1))) produces the array indices: the seq command prints a sequence of numbers from x to y, one per line. If you call seq 0 4, you will get 0, 1, 2, 3, 4.</p>&#13;
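The index bookkeeping can also be dropped entirely: bash will expand an array's values directly, and quoting the expansion keeps elements containing spaces intact. A minimal equivalent of the loop above:

```shell
#!/bin/bash
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )

# Iterate over the values themselves; no seq and no index arithmetic needed.
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```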
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment or shipping account credentials, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento, then encrypt the first CLI argument with the core encryption model&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
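<p>The alias leans on <em>git symbolic-ref --short HEAD</em>, which resolves the branch HEAD currently points at. A quick sketch to see what it returns (branch name is arbitrary; the -b flag to git init needs git 2.28 or newer):</p>

```shell
# In a throwaway repo, symbolic-ref prints the current branch name,
# the value the 'sup' alias appends to origin/ when setting the upstream.
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b zendesk          # -b picks the initial branch name (git 2.28+)
git symbolic-ref --short HEAD   # prints: zendesk
```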
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables set in one stage cannot be passed along the pipeline, as each subprocess starts with a brand new environment.</p>
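<p>A minimal sketch of the gotcha: a counter incremented inside a while loop fed by a pipe never makes it back to the parent shell, because the loop ran in a subshell.</p>

```shell
#!/bin/sh
# The while loop is the second stage of a pipeline, so it runs in a
# subshell; its copy of count is thrown away when the pipeline ends.
count=0
printf 'one\ntwo\nthree\n' | while read -r line; do
    count=$((count + 1))
done
echo "$count"   # prints 0, not 3
```

<p>In bash 4.2+, <em>shopt -s lastpipe</em> (in a script, where job control is off) runs the last stage in the current shell; redirecting input into the loop instead of piping (<em>done &lt; file</em>) also avoids the subshell.</p>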
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
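<p>If you want to see the setting in action, here is a throwaway-repo sketch (branch names and the demo identity are made up; git init -b needs git 2.28+): the fast-forward merge goes through, the divergent one is refused until you rebase or pass --no-ff.</p>

```shell
#!/bin/sh
# Throwaway repo showing merge.ff=only in action.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git config merge.ff only

echo base > file; git add file; git commit -qm base
git checkout -qb feature
echo feature >> file; git commit -qam feature

git checkout -q main
git merge -q feature            # direct descendant, so this fast-forwards fine

git checkout -qb topic          # now force the branches to diverge
echo t > t; git add t; git commit -qm topic
git checkout -q main
echo m > m; git add m; git commit -qm main-side

if git merge topic 2>/dev/null; then echo merged; else echo refused; fi   # prints: refused
```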
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and, at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing a description message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a>, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself: motivations for it, and a brief comparison between the two principal xUnit TDD styles, namely statist TDD and mockist/London-school TDD. The former is a style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
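<p>To see the NULL behaviour concretely, here is the same pattern against a toy table (schema and data invented for the demo; sqlite3 used for convenience, as NULLIF and MIN behave the same way in MySQL):</p>

```shell
# NULLIF(price, 0) maps zero prices to NULL, which MIN() then skips.
sqlite3 :memory: "
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0), (1, 9.99), (1, 4.5), (2, 0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;"
# prints: 1|4.5
#         2|        (a group with only zero prices comes back NULL)
```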
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line:
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing to do is generate a self-signed SSL certificate / key pair and store it under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
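<p>The openssl req command above prompts interactively for the certificate fields. If you prefer a non-interactive version (handy for scripting), -subj supplies the distinguished name up front; this is just a sketch using the magento.dev hostname from this guide:</p>

```shell
# Generate a self-signed cert/key pair with no prompts.
# The CN (magento.dev) should match the server_name in the nginx config.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=magento.dev" \
  -keyout magento.dev.key -out magento.dev.crt 2>/dev/null
```

<p>The resulting files can then be moved into /opt/local/etc/nginx/ssl as before.</p>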
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
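<p>If your docroot lives deeper in the tree, the same rule applies to every ancestor directory. A small helper function captures the idea (a sketch of my own; chmod failures on directories you don&#39;t own are simply ignored):</p>

```shell
# Add the world-execute (traverse) bit to a directory and all of its
# ancestors, so the web server user can walk down to the docroot.
add_traverse() {
  dir=$1
  while [ -n "$dir" ] && [ "$dir" != "/" ] && [ "$dir" != "." ]; do
    chmod a+x "$dir" 2>/dev/null || true
    dir=$(dirname "$dir")
  done
}

# Usage: add_traverse /Users/aaron/Sites/magento
```
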
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed a run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal and start the Selenium server</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by writing the socket path for the MySQL version you&#39;re using into the PHP mysql.ini file. I use mysql55, so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
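<p>It&#39;s worth confirming the socket actually exists at that path before pointing PHP at it, since different MySQL ports use different run directories. A tiny check along these lines (the mysql55 path is the one from this post):</p>

```shell
# Report whether a path exists and is a unix socket.
check_sock() {
  if [ -S "$1" ]; then echo "ok: $1"; else echo "missing: $1"; fi
}

# Usage: check_sock /opt/local/var/run/mysql55/mysqld.sock
```
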
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
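<p>For hosts you mount regularly, the same identity can live in ~/.ssh/config instead, and sshfs (which shells out to ssh) will pick it up automatically. The host alias below is illustrative:</p>

```
# Hypothetical ~/.ssh/config entry
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
```

<p>With that in place the mount shortens to: sshfs awshost:/var/www/ ~/Sites/awshost</p>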
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a URL that I knew would be the name of an image. I know strstr well, but it gives you the remainder of a haystack string from the first occurrence of some needle. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
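<p>As an aside, POSIX shell offers the same pair of operations through parameter expansion, where the naming (one # for the shortest match, two for the longest) is arguably no more memorable:</p>

```shell
url='http://www.google.com/a/b/c/d.img'
echo "${url##*/}"  # longest prefix matching */ removed: prints d.img
echo "${url#*/}"   # shortest prefix removed: prints /www.google.com/a/b/c/d.img
```
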
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder whether it would be worth creating an object library to wrap primitive types such as String, Integer, Array and Float. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is in PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (née ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him: his relationship with the researchers who shared his vision of interactive computing, and with those, whether in other labs or in management, who didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it or, worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core code pool; instead take advantage of the local and community code pools&#39; higher class-loading priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order local, community then core. This means if two classes have the name Mage_Core_Model_Foo one exists in local the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
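<p>Under the hood, -s stop delivers SIGTERM to the master process while -s quit delivers SIGQUIT. The behavioural difference is easy to mimic with a toy shell &#39;worker&#39; (purely an illustration, not nginx):</p>

```shell
# Toy process: exits immediately on TERM, but drains first on QUIT,
# mirroring nginx's stop/quit distinction.
worker() {
  trap 'echo draining; echo done; exit 0' QUIT
  trap 'echo killed; exit 1' TERM
  while :; do sleep 0.1; done
}

# Usage: run `worker &` then compare `kill -TERM $!` with `kill -QUIT $!`.
```
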
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, however, resolve in November to read more technical books, and to focus particularly on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Smalltalk sported block closures over a quarter of a century ago; Ruby has only recently made them fashionable. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
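<p>For example, feeding it a couple of made-up colon-delimited lines:</p>
<pre><code>$ printf 'root:x:0\nbin:x:1\n' | column -s: -t
</code></pre>
<p>Each field lines up in its own padded column, which is much easier to scan than the raw colon-separated form.</p>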
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
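<p>Neither behaviour is on by default in a bare zsh; they come from a handful of options (plus, for the &#39;d&#39; shortcut, an alias). This is my best recollection of the minimal setup, so check zshoptions(1) and your own framework&#39;s defaults:</p>
<pre><code>setopt auto_cd      # a bare directory name becomes a cd
setopt auto_pushd   # every cd pushes the old directory onto the stack
alias d='dirs -v'   # numbered view of the directory stack
</code></pre>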
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local, the default, and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
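<p>If you&#39;re using Bundler, you can persist those flags per-gem instead of retyping them; something like this should work (check bundle config --help for your Bundler version, and adjust the paths to your macports layout):</p>
<pre><code>$ bundle config build.mysql2 --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql
</code></pre>
<p>After that, a plain bundle install will build mysql2 with the macports paths.</p>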
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, to debug during a PHPUnit test you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote server, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 back to the ssh client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
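<p>Putting the pieces together, a full session looks something like this (hostnames as above; the second command is run inside the ssh session on the vm, where xdebug&#39;s default remote_host of localhost now tunnels back to the IDE):</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
$ PHP_IDE_CONFIG='serverName=mydevmachine.local' phpunit -c phpunit.xml
</code></pre>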
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad at all. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to PERL&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
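<p>For contrast, this is all Composer asks of you: a declarative, per-project composer.json (monolog here is just a stand-in for any package on Packagist):</p>
<pre><code>{
    "require": {
        "monolog/monolog": "~1.0"
    }
}
</code></pre>
<p>Run composer install and the constraints are resolved per-project, so two projects can happily depend on different versions of the same library.</p>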
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
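<p>A related trick, not strictly --name-only but in the same spirit: --name-status prefixes each path with what happened to it (A added, M modified, D deleted):</p>
<pre><code>$ git diff --name-status master..origin/master
</code></pre>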
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still largely respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails works through the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
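<p>To make the search order concrete, here&#39;s a tiny plain-Ruby sketch of it (no Rails required; the classes and handler names are made up for illustration):</p>
<pre><code>NotFoundError = Class.new(StandardError)

# rescue_from handlers in registration (top-to-bottom) order:
HANDLERS = [
  [Exception,     :render_error],  # most general, registered at the top
  [NotFoundError, :render_404],    # more specific, registered below it
]

# "Searched from bottom to top": the last registered match wins.
def handler_for(exception)
  match = HANDLERS.reverse.find { |klass, _| exception.is_a?(klass) }
  match and match.last
end
</code></pre>
<p>With <strong>Exception</strong> registered at the top, NotFoundError still gets first refusal; register <strong>Exception</strong> at the bottom and it swallows everything.</p>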
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branches list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
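<p>You can also fold the prune into your fetch, and keep things clean as you go:</p>
<pre><code>$ git fetch --prune origin
</code></pre>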
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
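<p>For the benefit of anyone who can&#39;t see the gist, the override is shaped roughly like this; the area, event and observer node names here are recalled from Mage_Log&#39;s config.xml, so verify them against your Magento version (and repeat the block for each logging event you want gone):</p>
<pre><code>&lt;config&gt;
    &lt;frontend&gt;
        &lt;events&gt;
            &lt;controller_action_postdispatch&gt;
                &lt;observers&gt;
                    &lt;log&gt;&lt;type&gt;disabled&lt;/type&gt;&lt;/log&gt;
                &lt;/observers&gt;
            &lt;/controller_action_postdispatch&gt;
        &lt;/events&gt;
    &lt;/frontend&gt;
&lt;/config&gt;
</code></pre>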
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now, look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of how bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (programmers are human) missing the other, an identical line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine; it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). To use Data Bags with Chef Solo you need version 10.14.0 or above, which means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of MySQL in production, and the resulting Magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting: {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much as ; does in regular bash).</p>
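To see the whole recipe end to end, here&#39;s a small runnable sketch (the file names, dates and scratch directory are all made up for illustration):

```shell
# Scratch directory; all names and dates here are hypothetical
dir=$(mktemp -d)
cd "$dir"

# Three files with distinct modification times
touch -t 202001010000 old.log
touch -t 202006150000 mid.log
touch -t 202012310000 new.log

# Boundary files marking the range 2020-06-01 .. 2020-07-01
touch -t 202006010000 start_date_file
touch -t 202007010000 end_date_file

# Files newer than the start boundary and not newer than the end boundary
# (the boundary files themselves are excluded by name)
matches=$(find . -type f -newer start_date_file ! -newer end_date_file ! -name '*_date_file')
echo "$matches"
```

Once the listing looks right, swap the echo for the -exec rm form shown above.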
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning. First: if you have an old MagSafe (1) power adapter, which I did from my old 13&quot; MacBook Pro, and it has a higher or equal wattage rating, you can use it with your MacBook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60-watt MacBook Pro MagSafe can power a 45-watt MacBook Air, but a 60-watt MagSafe can&#39;t power an 85-watt 15&quot; MacBook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh, if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change it no problem.</p>
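A sketch of the whole dance, run here against a scratch copy of /etc/shells so it doesn&#39;t need root (the path /opt/local/bin/bash is the usual Macports location, but check your own install):

```shell
# Work on a scratch copy of /etc/shells for illustration
shells_file=$(mktemp)
printf '/bin/bash\n/bin/zsh\n' > "$shells_file"

new_shell=/opt/local/bin/bash

# chsh refuses shells not listed in /etc/shells, so append ours (idempotently)
grep -qx "$new_shell" "$shells_file" || echo "$new_shell" >> "$shells_file"

# On the real system you would sudo-edit /etc/shells itself,
# then run: chsh -s /opt/local/bin/bash
tail -n1 "$shells_file"
```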
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
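To see the pipeline in action without a database handy, here&#39;s the same awk/paste/sed chain fed from printf instead of mysql (the colour values are made up):

```shell
# Stand-in for `mysql --silent` output: one value per line (hypothetical data)
rows='red
green
blue'

# Quote each line, join with commas, wrap in JS array brackets
js_array=$(printf '%s\n' "$rows" \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d, - \
  | sed 's/\(.*\)/[\1];/')

echo "$js_array"
```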
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is harder, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules, and this caused that product/salesrule index loop to detonate.</p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration; while that means it has to be constructed again each time, it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
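The point of the symlink is that any paths you configure against /opt/PhpStorm survive upgrades: extract the new version, repoint one link. A quick sketch with scratch directories standing in for /opt (the version number here is hypothetical):

```shell
# Scratch stand-in for /opt (the real commands would target /opt with sudo)
root=$(mktemp -d)
mkdir -p "$root/opt/PhpStorm-4.0.1/bin"   # stands in for the extracted tarball
ln -s "$root/opt/PhpStorm-4.0.1" "$root/opt/PhpStorm"

# A launcher pointing at /opt/PhpStorm/bin/phpstorm.sh keeps working
# after an upgrade, because only the symlink target changes
readlink "$root/opt/PhpStorm"
```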
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils... and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. And by break, in the absolute best case, I mean merely became unbootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days, if you do something silly (like, I don&#39;t know, dist-upgrading from Ubuntu Oneiric to Precise), you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically at the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike in the if branch, where they are in the reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows (one for each unique status value). For this code to work as you would expect, the entity id (product id) needs to be the first column in the result set, so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is still an unusual idea, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and a spouse who won&#39;t see you for at least the next week), that nothing actually hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The idea that software is about communication is emphasised across the book, and tests are no different: they should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being. </p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display in the depth and quality of their references. While GOOS is a pretty domain-specific (Mock Object) text, it serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to Twig files, edit your .vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
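<p>If you want a per-user install of the syntax file itself, drop it into your vim syntax directory first (this assumes you saved the vim.org script as htmljinja.vim; the filename may differ):</p>

```shell
# Create the user syntax directory if it does not exist yet
mkdir -p "$HOME/.vim/syntax"
# then copy the downloaded file into it, e.g.:
# cp htmljinja.vim "$HOME/.vim/syntax/"
```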
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (Mac OSX&#39;s Terminal.app is a notable exception, since it starts every new window as a login shell), .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell in other ways, such as with the su - command or via the explicit login-shell option some desktop environments provide. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only read if your .bash_profile sources it itself.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change much. But it&#39;s quite reasonable to just put a source .bashrc line in your .bash_profile and then keep everything in .bashrc.</p>
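<p>A minimal sketch of that delegating setup (the path entry is illustrative):</p>

```shell
# ~/.bash_profile: one-time environment setup, read at login
export PATH="$HOME/bin:$PATH"

# Delegate everything else to .bashrc so login and non-login
# interactive shells behave the same
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```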
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability of a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em>, the configuration files of the uninstalled packages are not deleted.</p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
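<p>If you want to see which packages are in this half-removed state before purging, dpkg -l flags them with &#39;rc&#39; (removed, config files remain); the second column is the package name. The filter is easy to demonstrate (the dpkg -l style output below is simulated, as the real listing depends on your system):</p>

```shell
# Lines starting with "rc" are removed packages whose config files
# remain; print column 2, the package name.
printf 'ii  bash    5.0  amd64  GNU Bourne Again SHell\nrc  oldpkg  1.0  amd64  a removed package\n' \
  | awk '/^rc/ { print $2 }'
```

<p>On a real system you would run <em>dpkg -l | awk &#39;/^rc/ { print $2 }&#39;</em> instead of the printf.</p>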
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there; specifically though, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that: we need compatible x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and the various Qt libraries.</p>
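<p>A handy trick when the list is long: pipe ldd through grep so only the unresolved entries remain. The output here is simulated with two sample lines, since the real run needs the Skype binary:</p>

```shell
# Keep only the "not found" lines from ldd-style output.
# On a real system: ldd /usr/bin/skype | grep 'not found'
printf 'libXss.so.1 => not found\nlibm.so.6 => /lib32/libm.so.6 (0xf749e000)\n' \
  | grep 'not found'
```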
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of every address object (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entity ID, and then returns whichever value matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible, mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>, where you can find a lot of tips to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone who needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope, though, that someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to 1152x720, preserving its 16:10 ratio within the requested bounds. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
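<p>To check the fit-inside-the-box arithmetic for yourself, the rule a plain -resize (no bang) applies can be sketched in shell without ImageMagick installed. This is a simplified sketch using integer division, not ImageMagick&#39;s actual code, so rounding may differ by a pixel on awkward ratios:</p>

```shell
#!/bin/sh
# Fit srcW x srcH inside boxW x boxH preserving aspect ratio, the way a
# plain '-resize WxH' (no bang) behaves.
fit() {
  srcW=$1; srcH=$2; boxW=$3; boxH=$4
  # Scale to the box width; if the height then overflows, scale to the height.
  if [ $((srcH * boxW / srcW)) -le "$boxH" ]; then
    echo "${boxW}x$((srcH * boxW / srcW))"
  else
    echo "$((srcW * boxH / srcH))x${boxH}"
  fi
}
fit 1920 1080 1280 720   # 1280x720 - 16:9 fits the box exactly
fit 1920 1200 1280 720   # 1152x720 - 16:10 is height-limited
```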
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to the branch adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Specifying just a leading : (and no local branch name) is basically saying: push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
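<p>The whole round trip is easy to reproduce against a throwaway pair of local repositories (all paths and branch names below are made up for the demo):</p>

```shell
#!/usr/bin/env bash
# Throwaway demo: publish a branch to a local bare 'remote', then delete it
# with the colon syntax.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email demo@example.com
git config user.name Demo
echo hello > file.txt
git add file.txt
git commit -qm 'initial commit'
git remote add origin "$tmp/origin.git"
git branch develop                  # a second branch to publish, then delete
git push -q origin HEAD develop     # publish the current branch and develop
git push -q origin :develop         # push 'nothing' into develop: deletes it
git ls-remote --heads origin        # develop is gone
```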
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits for the end of its input (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it to the original file.</p>
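<p>If sponge is not installed, the same in-place effect can be had with an explicit temporary file. A minimal sketch on a generated sample file (the filename is made up; octal \351 is byte 0xE9, the cp1252 encoding of &#39;é&#39;):</p>

```shell
#!/bin/sh
# sponge-free equivalent: convert to a temp file, then replace the original.
cd "$(mktemp -d)"
printf 'caf\351\n' > sample.txt                  # 'café' as cp1252, 5 bytes
iconv -f cp1252 -t utf-8 sample.txt > sample.txt.tmp
mv sample.txt.tmp sample.txt
wc -c < sample.txt   # 6 bytes: the single 0xE9 became a two-byte UTF-8 sequence
```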
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
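<p>You can convince yourself of the mechanics offline by swapping wget for any other command that writes a tarball to stdout (the file and directory names below are made up for the demo):</p>

```shell
#!/usr/bin/env bash
# Same shape as the wget one-liner, fed from a local tarball instead.
set -e
cd "$(mktemp -d)"
mkdir demo && echo 'hello' > demo/file.txt
tar czf demo.tar.gz demo
rm -rf demo                    # throw the directory away so tar must recreate it
tar zx < <(cat demo.tar.gz)    # cat stands in for 'wget -q -O -'
cat demo/file.txt              # hello
```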
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then go through the list again, disabling the previously greyed-out modules (those that were blocked because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output, so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
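<p>The command substitution mechanics can be sanity-checked with harmless stand-ins before pointing them at Drush (printf plays the part of pm-list here, and the module names are invented):</p>

```shell
#!/bin/sh
# Stand-in for: drush pm-disable `drush pm-list --no-core --type=module --pipe`
# The backticks capture the newline-separated list; word splitting then turns
# each line into a separate argument for the outer command.
set -- `printf '%s\n' ad ad_channel click_filter`
echo "disabling $# modules: $*"
```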
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
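<p>Here is the same recipe reproduced against a throwaway local remote. One hedge: newer Git versions replace git branch --set-upstream with git push -u (used below) or git branch --set-upstream-to, but the effect is the same:</p>

```shell
#!/usr/bin/env bash
# Throwaway demo: create a local feature branch and publish it with a
# tracking relationship in one push.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work" && cd "$tmp/work"
git config user.email demo@example.com
git config user.name Demo
echo x > file && git add file && git commit -qm 'base'
git remote add origin "$tmp/origin.git"
git checkout -q -b my-new-feature
echo y >> file && git commit -qam 'Initial feature commit'
git push -q -u origin my-new-feature                      # publish and set upstream
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'    # origin/my-new-feature
```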
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise the user&#39;s existing supplementary groups will be replaced with those supplied.</p>
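<p>A quick way to verify the change took effect is to inspect the output of id -nG. The in_group helper below is a made-up convenience; since usermod itself needs root, it is demonstrated here with the current user and their primary group:</p>

```shell
#!/bin/sh
# Check whether a user belongs to a group ('in_group' is a made-up helper).
in_group() {
  id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}
# After 'usermod -a -G wheel aaron' you would expect: in_group aaron wheel
in_group "$(id -un)" "$(id -gn)" && echo 'member'
```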
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35-minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
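<p>The whole rescue can be replayed in a throwaway repository (the file names and messages below are made up; note that plain git stash is shorthand for git stash save):</p>

```shell
#!/usr/bin/env bash
# Throwaway reproduction: uncommitted changes started on the wrong branch
# are carried over to develop via the stash.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email demo@example.com
git config user.name Demo
echo base > notes.txt && git add notes.txt && git commit -qm 'base'
git branch develop              # where the work should have gone
echo 'new work' >> notes.txt    # oops: edited on the default branch
git stash                       # park the uncommitted changes
git checkout -q develop
git stash apply
git commit -qam 'Apply stashed changes'
git log --format=%s -1          # Apply stashed changes
```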
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
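<p>The tracking checkout can be exercised against a throwaway upstream whose develop branch already exists (all paths below are made up):</p>

```shell
#!/usr/bin/env bash
# Throwaway demo: clone a repo, then check out its remote develop branch
# as a local tracking branch.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/upstream" && cd "$tmp/upstream"
git config user.email demo@example.com
git config user.name Demo
echo x > file && git add file && git commit -qm 'base'
git branch develop
git clone -q "$tmp/upstream" "$tmp/clone" && cd "$tmp/clone"
git checkout -b develop origin/develop
git rev-parse --abbrev-ref HEAD    # develop
```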
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
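<p>Equivalently, you can let git write that file for you. The sketch below points HOME at a scratch directory so it does not touch a real ~/.gitconfig:</p>

```shell
#!/bin/sh
# Set the same identity via 'git config --global', which writes $HOME/.gitconfig.
HOME=$(mktemp -d)
export HOME
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
git config --global user.name    # Jenkins
```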
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view them.</p>
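<p>For reference, the artifacts land in the directories the ant output above created. Listing them from the project root (the mkdir line is only there so the snippet runs standalone; on a real checkout the build itself creates these):</p>

```shell
# Directory names taken from the ant clean/mkdir steps shown above. The mkdir
# is only needed to make this listing standalone; ant creates them for real.
mkdir -p build/api build/code-browser build/coverage build/logs build/pdepend
ls build
# lists: api  code-browser  coverage  logs  pdepend
```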
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new project, we copy this template and give it a name. Choose whatever you like, but for the purposes of this tutorial I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
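<p>If you&#39;d rather script the port change than open an editor, a sed one-liner does it. The sketch below works on a local copy of the file so it&#39;s safe to run as-is; on a real box run the sed (with sudo) against /etc/default/jenkins itself:</p>

```shell
# Sketch: switch Jenkins to port 8081 non-interactively. The cp/printf lines
# just produce a local stand-in for /etc/default/jenkins for illustration.
cp /etc/default/jenkins jenkins.defaults 2>/dev/null ||
  printf '# port for HTTP connector (default 8080; disable with -1)\nHTTP_PORT=8080\n' > jenkins.defaults
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' jenkins.defaults
grep '^HTTP_PORT=' jenkins.defaults
```

<p>Restart the service as above for the change to take effect.</p>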
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this: of all the caching quirks in Magento, this is the one I&#39;ve lost the most time to.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>
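<p>For reference, with Magento&#39;s default file-based cache backend the clear-down amounts to emptying var/cache under the store root (MAGENTO_ROOT below is an illustrative placeholder):</p>

```shell
# Assumes the default file-based cache backend; MAGENTO_ROOT is a placeholder
# for wherever your store lives.
MAGENTO_ROOT="${MAGENTO_ROOT:-.}"
mkdir -p "$MAGENTO_ROOT/var/cache"    # harmless if it already exists
rm -rf "$MAGENTO_ROOT/var/cache/"*
```

<p>Alternatively, flush it from the admin under System &gt; Cache Management.</p>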

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
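<p>A query along these lines does the job (a from-memory sketch, not necessarily the embedded gist verbatim); here it&#39;s written to a file ready to pipe into the mysql client:</p>

```shell
# Sketch: report the ten largest tables in the current schema via MySQL's
# INFORMATION_SCHEMA. Feed the file to the client, e.g.:
#   mysql -uuser -p mydb < largest_tables.sql
cat > largest_tables.sql <<'SQL'
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = DATABASE()
ORDER BY (data_length + index_length) DESC
LIMIT 10;
SQL
```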
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. ones started over sftp, scp, http or ftp.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but enough for Google to lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved either using the package manager or aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because a subsequent git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
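<p>To see the whole flow end to end, you can reproduce it locally with a bare repository standing in for GitHub (paths and names below are illustrative; note that git 1.8 and later spell the last step --set-upstream-to):</p>

```shell
# Local reproduction of the remote add / push / upstream dance; a bare repo in
# a temp directory stands in for github. Paths and identities are illustrative.
tmp=$(mktemp -d)
git init --bare "$tmp/foo.git"
git init "$tmp/work"
git -C "$tmp/work" symbolic-ref HEAD refs/heads/master   # pin the branch name
git -C "$tmp/work" config user.email you@example.com
git -C "$tmp/work" config user.name 'Example User'
echo hello > "$tmp/work/README"
git -C "$tmp/work" add README
git -C "$tmp/work" commit -m 'Initial commit'
git -C "$tmp/work" remote add origin "$tmp/foo.git"
git -C "$tmp/work" push origin master
# git 1.8+ replaced 'git branch --set-upstream' with --set-upstream-to
git -C "$tmp/work" branch --set-upstream-to=origin/master master
git -C "$tmp/work" pull   # no more refspec complaints
```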
<p>Personally, I find option 4 the best, as it involves the least work.</p>
<p>To avoid having to do this at all, you can set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has in the past been notoriously difficult to apply TDD practices to. Luckily, in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a> testing frameworks released to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection, along with some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array, and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
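<p>The function is roughly along these lines (a sketch of the approach rather than the gist itself; the names are mine):</p>

```shell
# Sketch: turn the glob into a SQL LIKE pattern, ask mysql for the matching
# table names, then hand the list straight to mysqldump. Function and variable
# names are illustrative, not the gist's.
glob_to_like() {
  # escape literal underscores (LIKE's single-char wildcard), then map * to %
  printf '%s' "$1" | sed -e 's/_/\\_/g' -e 's/\*/%/g'
}

mysqldump_bypattern() {
  db="$1"; pattern="$2"
  tables=$(mysql -N -B -e "SHOW TABLES LIKE '$(glob_to_like "$pattern")'" "$db")
  # word-splitting of $tables is deliberate: mysqldump takes tables as arguments
  mysqldump "$db" $tables
}
```

<p>Usage: mysqldump_bypattern mydb &#39;mytables_*&#39; &gt; mytables.sql</p>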
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest egit/jgit packages not yet being in the Indigo release p2 update repository. Until those updates land, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, instead using the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas of Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select the PDT Development Tools All In One SDK (leave the others unselected) and click Next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to ignore them for now or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C), for example, is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods (the main ones return its string and integer values), it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages) two strings are different objects even if they consist of the same sequence of characters. In Ruby, two Symbols made of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial:</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ANSI and up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down considerably, but it&#8217;s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided, as (by default with MyISAM) you&#8217;re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but locks should still be held for as short a time as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against filesize. The extra CPU time spent decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL&#8217;s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump, meaning other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the integrity of a MyISAM table can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once each table has been fully imported.</p>&#13;
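Putting the options from this section together with the earlier locking flags, a complete export command might look like the following (user and database names are placeholders):

```shell
# Sketch combining the mysqldump options discussed above:
#   --single-transaction  consistent InnoDB dump without blocking writers
#   --skip-lock-tables    do not lock MyISAM tables during the dump
#   --disable-keys        let the import rebuild indexes once per table
#   --no-autocommit       commit once per table on import, not per statement
mysqldump --single-transaction --skip-lock-tables \
    --disable-keys --no-autocommit \
    -uuser -p mydatabase > mydump.sql
```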
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds (or, to use git terminology, &#8216;stages&#8217;) a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified and deleted tracked files. Alternatively, you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you&#8217;ll see the .git extension bandied about, particularly on github. This is the canonical naming convention for a bare git repository (i.e. one that has only the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs, I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally it's shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to set up your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed to the program. </p>&#13;
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>The mysql driver gets confused because when you install the MySQL Rubygem as directed by Rake, it links against the MySQL bundled with OS X rather than the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're searching the full index in an attempt to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
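<p>Putting the two halves together, an "update" is a delete keyed on the Keyword field followed by a fresh add. A sketch only (it requires Zend Framework 1.x on the include path, and assumes an open <code>$index</code> plus <code>$newTitle</code>/<code>$newContents</code> variables of your own):</p>

```php
// Find the old document via its untokenized Keyword field ...
$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');
foreach ($index->termDocs($term) as $id) {
    $index->delete($id);          // ... and remove it
}

// ... then add the replacement document.
$doc = new Zend_Search_Lucene_Document();
$doc->addField(Zend_Search_Lucene_Field::Keyword('path', '/somepath/somewhere'));
$doc->addField(Zend_Search_Lucene_Field::Text('title', $newTitle));
$doc->addField(Zend_Search_Lucene_Field::UnStored('contents', $newContents));
$index->addDocument($doc);
$index->commit();                 // make the change visible
```

<p>Deleted documents are only marked as deleted until the index is optimized, so the delete-then-add pattern is cheap to run.</p>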
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one reason it is unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily, in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, it seems in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a custom language pack to install, convert its XML to UTF-8 in the same way.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way via AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and on to the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify and support what remains. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables whose names are declared capitalised. Oh sorry, that should be capitalized. Ruby doesn't speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know: if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute, there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And of course a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
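<p>The parallel is easy to see with the file functions, which keep C's stdio names and call shapes almost exactly. A small illustrative sketch (<code>readLines()</code> is a hypothetical helper, not from the post):</p>

```php
<?php
// fopen/fgets/fclose in PHP mirror their C stdio namesakes,
// right down to fgets() returning false (NULL in C) at EOF.
function readLines(string $path): array
{
    $fp = fopen($path, 'r');                 // FILE *fp = fopen(path, "r");
    $lines = [];
    while (($line = fgets($fp)) !== false) { // while (fgets(buf, n, fp) != NULL)
        $lines[] = rtrim($line, "\n");
    }
    fclose($fp);                             // fclose(fp);
    return $lines;
}
```

<p>Rename the variables and drop the <code>$</code> sigils and the body is very nearly legal C.</p>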
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and be left with a recognisable C fragment.</p>&#13;
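<p>For instance, a bare-bones TCP client in PHP reads almost like the C socket()/connect()/send()/recv() sequence. A sketch (<code>tcpRequest()</code> and <code>readAll()</code> are hypothetical names, not from the post):</p>

```php
<?php
// Read until EOF -- the recv() loop you would write in C.
function readAll($fp): string
{
    $response = '';
    while (!feof($fp)) {
        $chunk = fread($fp, 1024);
        if ($chunk === false) {
            break;
        }
        $response .= $chunk;
    }
    return $response;
}

// socket() + connect() are rolled into fsockopen(); fwrite() plays send().
function tcpRequest(string $host, int $port, string $payload): string
{
    $fp = fsockopen($host, $port, $errno, $errstr, 5.0);
    if ($fp === false) {
        throw new RuntimeException("connect failed: $errstr ($errno)");
    }
    fwrite($fp, $payload);
    $response = readAll($fp);
    fclose($fp);                  // close(fd);
    return $response;
}
```

<p>Swap the exception for a <code>perror()</code> call and you are most of the way to the equivalent C.</p>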
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters (19 usable, plus the terminating NUL), and str itself refers to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20-char string defined but want to store a 25-char string, you need to reallocate memory. You could declare a new, larger array or perform a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
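As a rough sketch of what that means in practice (variable names invented), each '.=' below can force the engine to grow and copy the string's buffer, much as you would have to realloc() and copy in C, whereas implode() can size the result up front:

```php
<?php
$words = ['the', 'quick', 'brown', 'fox'];

// Repeated concatenation: each pass may reallocate the growing buffer.
$sentence = '';
foreach ($words as $word) {
    $sentence .= $word . ' ';
}
$sentence = rtrim($sentence);

// implode() builds the same string, but can allocate the destination once.
$joined = implode(' ', $words);

var_dump($sentence === $joined);  // bool(true)
```

Both produce the same string; the difference is how many intermediate allocations happen along the way, which is exactly the cost the concat operator hides.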
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in: originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way toward explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
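For example, here is a minimal sketch (field names invented) that builds a CSV entirely in memory and reads it back, with no temporary file ever touching the disk:

```php
<?php
// Write CSV rows to an in-memory stream, then rewind and read them back.
$fh = fopen('php://memory', 'wb+');

fputcsv($fh, ['sku', 'qty']);
fputcsv($fh, ['ABC-123', 5]);

rewind($fh);                        // seek back to the start before reading
$csv = stream_get_contents($fh);
fclose($fh);

echo $csv;
```

The rewind() call is the step people usually forget: after writing, the stream position sits at the end, so reading without rewinding returns an empty string.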
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
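The same idea sketched in PHP, purely for illustration (dates invented): the formatted date string acts as the grouping key, just as DATE_FORMAT does in the query above.

```php
<?php
$dates = ['2010-07-07', '2010-07-21', '2010-06-05'];

$counts = [];
foreach ($dates as $date) {
    $key = (new DateTime($date))->format('Y-m');  // the "grouping_date"
    $counts[$key] = ($counts[$key] ?? 0) + 1;
}

print_r($counts);  // [2010-07] => 2, [2010-06] => 1
```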
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request Worldpay sent to your callback URL (including the encoded POST data), the other containing the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
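If you would rather script the resubmission than paste a one-off curl command, a sketch in PHP (only a handful of the fields shown above are included here; the URL and values are the same placeholders):

```php
<?php
// Rebuild the POST body from the failure notification's attachment.
// In practice you would use the full field set, not this subset.
$fields = [
    'msgType'     => 'authResult',
    'transStatus' => 'Y',
    'transId'     => '1000000000',
    'cartId'      => '12345678',
    'callbackPW'  => 'xyz',
];
$body = http_build_query($fields);

$ch = curl_init('https://mysite.com/callback');   // your callback URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// $response = curl_exec($ch);  // uncomment to actually resubmit
curl_close($ch);

echo $body;
```

This is the natural building block for the batch approach described below: feed each parsed notification into the same resubmission routine.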
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include third-party libraries is a convenient way of moving them out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals path/to/your/lib</code> (propedit requires a target path)</p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead…'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
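A rough sketch of the mapping an autoloader of this style performs (the helper names here are invented for illustration; Magento's real Varien autoloader does rather more):

```php
<?php
// Model alias part -> camel-cased class suffix: each underscore-separated
// word gets its first letter upper-cased, as a Magento-style loader expects.
function aliasToClassSuffix(string $alias): string {
    return str_replace(' ', '_', ucwords(str_replace('_', ' ', $alias)));
}

// Class name -> include path: underscores become directory separators.
function classToPath(string $class): string {
    return str_replace('_', '/', $class) . '.php';
}

$suffix = aliasToClassSuffix('a_long_name_for_a_model');
echo $suffix, "\n";               // A_Long_Name_For_A_Model
echo classToPath($suffix), "\n";  // A/Long/Name/For/A/Model.php
```

You can see why the camelcased original (ALongNameForAModel) can never be reached through this mapping: the underscore-to-directory step has no way to reconstruct the interior capitals.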
<p>On Windows (or the Mac's default, case-insensitive HFS+) this appears to work; on a case-sensitive filesystem, as is typical on Linux and other Unix systems, it will not.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up either in the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (relative to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type DATE or DATETIME, although FROM_UNIXTIME lets you work with Unix timestamp columns as well.</p>&#13;
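<p>For a timestamp column, the FROM_UNIXTIME route might look like the sketch below (the orders table and created_ts column are illustrative, not from a real schema):</p>

```sql
-- Rows whose Unix-timestamp column is more than 30 days old
SELECT * FROM orders
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(created_ts);
```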
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by editing app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer creates this file by substituting the details you provide into local.xml.template.</p>&#13;
<p>So, to change your admin url after running the installer, simply fire up an editor, open app/etc/local.xml, and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml files (app/design/frontend/default/default/layout) to refer to your new template block. For most general purpose, globally available blocks, this will be page.xml.</p>&#13;
<p>For example, to add a productfinder template to the three column layout, you need to edit page.xml and, within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block, add the following code near the closing &lt;/block&gt; tag:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>The type="core/template" attribute refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or in other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages under the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages under the url customer/account/edit/). Including your remove declaration in the right element here allows fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore, within the &lt;customer_account&gt; element we add the code:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your shell environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> (or <code>export LC_ALL='en_GB.UTF-8'</code>).</p>&#13;
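<p>A minimal sketch of the whole round trip (the de_DE.UTF-8 locale name is an example; use one that 'locale -a' actually lists on your machine):</p>

```shell
# Add a UTF-8 locale to your shell startup file, e.g. ~/.profile
export LC_ALL='de_DE.UTF-8'
# New shells will now agree with the terminal about encoding
echo "$LC_ALL"
```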
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to pass a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
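<p>Putting the two steps together, a quick sketch (the sample string is illustrative):</p>

```php
<?php
// "Café" as ISO-8859-1 bytes: 0xE9 is é in that encoding
$isoText = "Caf\xE9";

// Step 1: normalise the input to utf-8
$utf8Text = iconv('ISO-8859-1', 'UTF-8', $isoText);

// Step 2: tell htmlentities the real encoding when outputting
echo htmlentities($utf8Text, ENT_QUOTES, 'UTF-8'); // Caf&eacute;
```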
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages, it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It isn't, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to pass -i an explicit, empty backup suffix: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code>. BSD sed (as shipped with Mac OSX) requires this argument; GNU sed accepts a bare -i.</p>&#13;
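<p>A portable middle ground is to give -i a real backup suffix, which both GNU and BSD sed accept:</p>

```shell
# Create a scratch file, substitute in place, keep a .bak backup
printf 'hello world\n' > /tmp/helloworld.txt
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt   # goodbye world
```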
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br />http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) part generates the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and seq produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
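<p>For the record, bash can also iterate the array values directly, which avoids the index arithmetic altogether:</p>

```shell
# Quoting "${FILES[@]}" keeps paths containing spaces intact
FILES=( "a/path/to/a/file1" "a/path/to/another/file2" )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```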
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento so its models and helpers are available&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
// Plaintext to encrypt, taken from the first commandline argument&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
// Output the value encrypted with this installation's crypt key&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 "$file" "resized_$file"; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date --date 'last month' '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (Directory Service command line) utility lets you manipulate Open Directory records and, in our case, append a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
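<p>Under the hood the alias leans on <em>git symbolic-ref --short HEAD</em> to resolve the current branch name. A quick throwaway-repo sketch of what the alias expands to (the repo details and branch name here are made up for illustration):</p>

```shell
#!/usr/bin/env bash
# Build a disposable repo, switch to a branch, and print the command
# the 'sup' alias would effectively run for that branch.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m 'initial commit'
git checkout -q -b zendesk

branch=$(git symbolic-ref --short HEAD)
echo "git branch --set-upstream-to=origin/$branch"
```

<p>Running this prints <em>git branch --set-upstream-to=origin/zendesk</em>, which matches the command git itself suggests in the error message above.</p>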
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
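<p>A quick way to see the subshell behaviour, and one of the BashFAQ workarounds, in action (the sample data and counter are made up for illustration):</p>

```shell
#!/usr/bin/env bash
# Each stage of a pipeline runs in its own subshell, so the counter
# incremented inside the loop is a copy that dies with the pipeline.
count=0
printf 'a\nb\n' | while read -r line; do
    count=$((count + 1))
done
pipeline_count=$count        # still 0 in the parent shell

# Workaround: feed the loop without a pipe (here via a here-string),
# so it runs in the current shell and the assignment survives.
count=0
while read -r line; do
    count=$((count + 1))
done <<< "$(printf 'a\nb\n')"
echo "pipeline saw $pipeline_count, here-string saw $count"
```

<p>The first loop leaves the parent shell counter at 0; the second, which stays in the current shell, ends with 2.</p>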
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When one branch&#39;s history is directly ahead of the other&#39;s, the merge can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you&#39;re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit&#39;s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
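<p>To see the setting in action, here is a sketch using a throwaway repository (assumes git is installed; the branch name is made up):</p>

```shell
#!/usr/bin/env bash
# With merge.ff=only, merging a branch that has diverged is refused
# outright instead of quietly producing a merge commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
git config merge.ff only

git commit -q --allow-empty -m 'base'
git checkout -q -b feature
git commit -q --allow-empty -m 'feature work'
git checkout -q -                          # back to the starting branch
git commit -q --allow-empty -m 'diverge'   # feature can no longer fast-forward

if git merge feature >/dev/null 2>&1; then
    result=merged
else
    result=refused
fi
echo "$result"
```

<p>The merge is refused, leaving you to rebase the feature branch, or to override the config for that one merge with <em>git merge --no-ff feature</em>.</p>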
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing a description message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that constantly mutates global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero. For example, given a group with prices 0.00, 5.00 and 7.50, MIN(price) would return 0.00, while MIN(NULLIF(price, 0)) returns 5.00.</p>
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because the Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
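<p>The <em>openssl req</em> command above asks a series of questions interactively. If you would rather script it, the -subj flag supplies the subject inline; a quick sketch that also reads the subject back to verify the result (written to a temp dir here rather than the nginx ssl directory):</p>

```shell
#!/usr/bin/env bash
# Generate the same kind of self-signed cert non-interactively, then
# confirm it was issued for magento.dev.
set -e
dir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj '/CN=magento.dev' \
    -keyout "$dir/magento.dev.key" -out "$dir/magento.dev.crt" 2>/dev/null
subject=$(openssl x509 -noout -subject -in "$dir/magento.dev.crt")
echo "$subject"
```

<p>From there the key and crt files can be moved into /opt/local/etc/nginx/ssl as above.</p>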
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git installed already (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the socket path for the MySQL version you&#39;re using to PHP&#39;s mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
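<p>If you mount the share regularly, the same option can live in ~/.ssh/config under a host alias instead (the alias &#39;awshost&#39; below is my own invention), so the sshfs command shrinks to just the mount points:</p>

```
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
```

<p>With that in place, &#39;sshfs awshost:/var/www/ ~/Sites/awshost&#39; just works - and so does plain ssh.</p>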
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well, but that gives you the part of a string haystack from the first occurrence of some needle onwards. I wanted this behaviour, but from the <em>last</em> occurrence of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example:</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
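<p>For what it&#39;s worth, the leading slash strrchr leaves behind is easy to strip, and PHP&#39;s basename() happens to do the whole job in one call - a quick sketch, nothing project-specific assumed:</p>

```php
<?php
$url = 'http://www.google.com/a/b/c/d.img';

// strrchr() returns '/d.img'; substr() drops the leading slash.
echo substr(strrchr($url, '/'), 1); // prints d.img

// basename() treats the URL like a path and returns its last segment.
echo basename($url); // prints d.img
```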
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of these two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder whether it would be worth creating an object library to wrap the primitive types - String, Integer, Array, Float and so on. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is in PHP, or indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more - in particular, a new appreciation for a number of scientists I previously knew very little about, scientists that are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful, even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing - and those, whether in his or the other labs, or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. Inside it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs.</p>
<p>The book contains a number of particularly powerful scenes. Two in particular stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish, but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down, but Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it or, worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly - their feelings, motivations and backgrounds - that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t live with broken windows [1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
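<p>Stripped of the Magento context, the guard can be sketched as a standalone function (normalizeUnitPrice is my own name for illustration, not a Magento method):</p>

```php
<?php
// Hypothetical helper showing the fix in isolation: <unit-price>
// must be a valid decimal, but free items arrive as '' rather than 0.
function normalizeUnitPrice($price)
{
    // (float) '' is 0.0, so an empty string falls through to 0.00.
    return ((float) $price > 0) ? (float) $price : 0.00;
}

var_dump(normalizeUnitPrice(''));     // float(0)
var_dump(normalizeUnitPrice('9.99')); // float(9.99)
```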
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight software entropy (software&#39;s tendency to lose structure over time). The concept has parallels with the real world, as urban areas with broken windows tend to see higher levels of vandalism than areas where windows are promptly repaired.</p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb: &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo - one in local, the other in core - the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80, and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat - but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite their formalising most of the vocabulary of OO software development during Smalltalk&#39;s creation, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can - whether it&#39;s just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
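<p>A toy input makes the effect easy to see (any delimited text works, not just /etc/passwd):</p>

```shell
# Align colon-delimited records into a table.
printf 'name:shell\naaron:zsh\nroot:bash\n' | column -s ':' -t
```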
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s little things like these that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer feature.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user&#39;s) permissions, you can still write it - vim pipes the buffer to the shell command, % expands to the current file&#39;s path, and tee (running under sudo) writes the buffer back to the file:</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (assuming Macports is in /opt/local, the default, and you are using the mysql55 port):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
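<p>If the project uses Bundler, the same flags can be recorded once in bundler&#39;s per-gem build configuration (assuming a Bundler-managed project), so every later bundle install picks them up automatically:</p>

```
$ bundle config build.mysql2 --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql
```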
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine; however, if we want to debug during a phpunit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version  3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to its own localhost on port 9000 back to port 9000 on the machine you connected from. So when xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months. PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="https://github.com">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to PERL&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires the stable release of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) and dubious (at worst) quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. They are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with (it won&#39;t read piped input unless you pass - as the filename). We can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to hand it one without having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
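Process substitution isn't specific to xmllint; any tool that insists on file arguments will accept it. A quick self-contained sketch with plain coreutils (no magerun needed, commands are illustrative only) shows what bash actually passes:

```shell
#!/usr/bin/env bash
# bash rewrites each <(cmd) into a /dev/fd/N path before running the command,
# so diff (which wants two file arguments) reads the pipes as ordinary files.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "identical"

# The substituted argument is just a path:
echo <(true)    # something like /dev/fd/63
```

That path is exactly what xmllint receives in `xmllint --format <(magerun config:dump)`.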
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
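If you want to see the shape of --name-only output without touching a real project, a throwaway repo is enough. A minimal sketch (assumes git is on your PATH; identity is passed inline with -c so no global config is required):

```shell
#!/usr/bin/env bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'one\n' > a.txt
printf 'two\n' > b.txt
git add a.txt b.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add two files"

# --format= suppresses the commit header, leaving just the touched paths
git show --name-only --format= HEAD
```

The same flag behaves identically on git diff, as described above.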
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and where the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh it, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
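The whole lifecycle is easy to reproduce locally with two throwaway repositories; a sketch (assumes git is installed, with identity passed inline via -c, and hypothetical branch names):

```shell
#!/usr/bin/env bash
set -e
work=$(mktemp -d)

# An "upstream" repo with one extra branch
git init -q "$work/upstream"
( cd "$work/upstream" &&
  printf 'x\n' > f && git add f &&
  git -c user.name=demo -c user.email=demo@example.com commit -q -m "init" &&
  git branch old-feature )

git clone -q "$work/upstream" "$work/clone"

# Upstream deletes the branch; the clone's remote-tracking ref goes stale
git -C "$work/upstream" branch -D old-feature

git -C "$work/clone" branch -r        # still lists origin/old-feature
git -C "$work/clone" remote prune origin
git -C "$work/clone" branch -r        # stale entry is gone
```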
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages and they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS safe: they fixed one spot, but (and programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and adds a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
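<p>As a sketch of the idea (the menu node and module name here are hypothetical; mirror whichever core menu entry you want to hide, and depend on any module name that is not installed):</p>

```xml
<?xml version="1.0"?>
<config>
    <menu>
        <!-- mirror the core adminhtml.xml path of the menu item to hide -->
        <catalog>
            <children>
                <sitemap>
                    <!-- Hypothetical_Module is not installed, so the item disappears -->
                    <depends><module>Hypothetical_Module</module></depends>
                </sitemap>
            </children>
        </catalog>
    </menu>
</config>
```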
<p>As always with Magento configuration or module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called
655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence, much as a semicolon does in regular bash; the backslash stops your shell from interpreting the semicolon before find sees it.</p>
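<p>To see the whole thing end to end, here&#39;s a self-contained run in a scratch directory (file names and dates are made up). Note that the end boundary file itself satisfies the range test, so both boundary files are excluded by name for safety:</p>

```shell
# Create three files plus two boundary timestamps, then delete only
# the file whose mtime falls between the boundaries.
dir=$(mktemp -d) && cd "$dir"
touch -t 201208010000 too_old
touch -t 201208051200 in_range
touch -t 201208091200 too_new
touch -t 201208040000 start_date_file
touch -t 201208080000 end_date_file
find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name start_date_file ! -name end_date_file -exec rm {} \;
ls    # in_range is gone; too_old and too_new survive
```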
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately, it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
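<p>Putting that together as a sketch (this assumes the default MacPorts prefix of /opt/local; adjust if yours differs):</p>

```shell
# Append the MacPorts bash to /etc/shells if it isn't already listed,
# then switch the login shell. The append needs root; chsh does not.
grep -qx '/opt/local/bin/bash' /etc/shells ||
    echo '/opt/local/bin/bash' | sudo tee -a /etc/shells
chsh -s /opt/local/bin/bash
```

<p>Open a new terminal window afterwards to pick up the new shell.</p>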
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It returns each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because a literal single quote is awkward to escape inside a single-quoted awk program, you pass it in via the q variable instead. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation tool, would do just fine here too.</p>
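<p>To see just the text-munging stages in isolation, you can stand in for mysql with printf (the column values here are made up):</p>

```shell
# Simulate three rows of mysql --silent output, then quote, join and wrap.
printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' \
    | sed 's/\(.*\)/[\1];/'
# prints: ['red','green','blue'];
```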
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It does mean the validator has to be constructed again on each iteration, but that allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
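<p>That layout can be scripted; the tarball and directory names below are hypothetical, so substitute whatever build you actually downloaded:</p>

```shell
# Unpack into /opt, then keep a stable /opt/PhpStorm symlink pointing at
# the current version so launchers keep working across upgrades.
sudo tar -xzf PhpStorm-4.0.1.tar.gz -C /opt
sudo ln -sfn /opt/PhpStorm-4.0.1 /opt/PhpStorm
```

<p>The -n flag stops ln from descending into the old symlink target when you later re-point /opt/PhpStorm at a newer version.</p>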
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can (in theory) run on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a c compiler, a shell, the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot, and by break, in the absolute best case, I mean merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
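<p>That step, as a one-liner:</p>

```shell
# Copy the host's resolver config so DNS works inside the chroot.
cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
```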
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each of them; this is typically used on grouped products when determining if all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set collapsed to just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to be first in the result set, so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and the spouse who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The idea that software is about communication is emphasised throughout the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting is clear and easy on the eye. I really like the images and diagrams, which are simple, authentic and support what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display in the depth and quality of their references. While GOOS is a fairly domain-specific (Mock Object) text, it serves as a wonderful launching pad for further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app, which runs login shells) .bash_profile gets sourced only by a login shell. Specifically this means when you enter your username and password at the console, or log in over SSH. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell in other ways, such as with the <em>su -</em> command or an explicit login shell provided by some desktop environments. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only sourced if your .bash_profile explicitly sources it.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a <em>source ~/.bashrc</em> line in your .bash_profile and then keep everything in .bashrc.</p>
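<p>The &#39;everything in .bashrc&#39; approach described above can be sketched like this (a minimal example, assuming a standard bash setup; adapt to taste):</p>

```shell
# ~/.bash_profile -- read by login shells only.
# Delegate to .bashrc so login and non-login shells share one config.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```

<p>With this in place, aliases, prompt settings and anything else that should apply to every interactive shell can live in .bashrc alone.</p>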
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably it is stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Modifyvm must only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
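<p>For completeness, the equivalent operations on a <em>running</em> VM go through controlvm instead (a sketch; &quot;VM name&quot; and the guestssh rule name are placeholders as above):</p>

```shell
# add a NAT port-forwarding rule while the VM is running
VBoxManage controlvm "VM name" natpf1 "guestssh,tcp,,2222,,22"

# and remove it again
VBoxManage controlvm "VM name" natpf1 delete "guestssh"
```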
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and several Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), then iterates over the collection, assigning each address to an array keyed by its entityId, and returns whichever value matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing it this way already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice by restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open an X window showing the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, Identify lets you get information about a file: size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a batch of images from 1920x1080 down to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 exactly, maintaining the 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to 1152x720, so the result still fits inside the requested box. But if we <em>really</em> want it to ignore common sense and squish things down to exactly 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
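<p>The box-fitting arithmetic behind this behaviour can be sketched in a few lines of bash. The fit helper below is purely illustrative, not part of ImageMagick:</p>

```shell
#!/usr/bin/env bash
# Sketch of the "fit inside WxH, keeping aspect ratio" arithmetic that
# convert applies by default. fit SRC_W SRC_H BOX_W BOX_H prints the
# resulting dimensions. Hypothetical helper, not part of ImageMagick.
fit() {
  local sw=$1 sh=$2 bw=$3 bh=$4
  if (( sw * bh >= sh * bw )); then
    # width is the limiting dimension
    echo "${bw}x$(( sh * bw / sw ))"
  else
    # height is the limiting dimension
    echo "$(( sw * bh / sh ))x${bh}"
  fi
}

fit 1920 1080 1280 720   # 16:9 source fits the box exactly -> 1280x720
fit 1920 1200 1280 720   # 16:10 source is limited by height -> 1152x720
```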
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well, if we look at the push command in more detail, it starts to make sense.</p>
<p>If we want to push a local branch to a remote, we do it like this:</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch into a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push <em>nothing</em> into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
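<p>If you want to try the whole push/delete cycle without touching a real remote, you can use a local bare repository as origin. Everything below (paths, branch names, identity) is throwaway, made up for the demo:</p>

```shell
#!/usr/bin/env bash
# Demo of creating and then deleting a remote branch, using a local
# bare repository as the "remote". All names here are throwaway.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
branch=$(git rev-parse --abbrev-ref HEAD)   # master or main
git remote add origin "$tmp/origin.git"
git push -q origin "$branch:develop"        # create remote branch 'develop'
git ls-remote --heads origin                # shows refs/heads/develop
git push -q origin :develop                 # push "nothing" = delete it
git ls-remote --heads origin                # develop is gone
```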
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to the output file. I.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
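<p>Sponge&#39;s soak-up-then-write behaviour can be approximated in a few lines of bash. The soak function below is a hypothetical stand-in, not the real moreutils implementation:</p>

```shell
#!/usr/bin/env bash
# A minimal stand-in for sponge: buffer all of stdin in a temporary
# file, and only then replace the target. Hypothetical 'soak' helper,
# not the real moreutils implementation.
soak() {
  local tmp
  tmp=$(mktemp)
  cat > "$tmp"      # read stdin to EOF first...
  mv "$tmp" "$1"    # ...then write the target in one step
}

printf 'HELLO\n' > demo.txt
tr 'A-Z' 'a-z' < demo.txt | soak demo.txt   # in-place edit without truncating the input early
cat demo.txt   # -> hello
```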
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
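<p>The difference is easy to demonstrate. Byte 0x93 is a curly quote in cp1252 but an unused control code in iso-8859-1 (the sample file below is made up for illustration):</p>

```shell
# bytes \223 and \224 (0x93/0x94) are curly quotes in cp1252,
# but unused control codes in iso-8859-1
printf 'He said \223hi\224\n' > sample-cp1252.txt
iconv -f cp1252 -t utf-8 sample-cp1252.txt   # -> He said “hi”
```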
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv, and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
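<p>Process substitution is worth playing around with on its own; each &#39;&lt;(command)&#39; expands to a file-like path you can pass anywhere a filename is expected:</p>

```shell
#!/usr/bin/env bash
# Each <(command) expands to a path (e.g. /dev/fd/63) whose contents
# are the command's output, so it can go anywhere a filename can.
echo <(echo hello)          # prints the substituted path itself
cat  <(echo hello)          # reads through the path: prints "hello"
# handy for commands that want two file arguments:
comm -12 <(printf 'a\nb\nc\n') <(printf 'b\nc\nd\n')   # lines in both: b, c
```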
<p>Sounds complicated but looks simple.</p>
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save the changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory, e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can pass the module list output as arguments to the disable command using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a>, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
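<p>The command substitution pattern itself is easy to demonstrate with ordinary shell tools; list_modules below is a contrived stand-in for the drush pm-list call:</p>

```shell
# the $(...) command's stdout becomes the outer command's arguments,
# just as the pm-list output becomes arguments to pm-disable.
# list_modules is a made-up stand-in for 'drush pm-list ... --pipe'.
list_modules() { printf 'ad\nad_channel\nclick_filter\n'; }

# word splitting turns the newline-separated list into separate arguments
echo disabling: $(list_modules)   # -> disabling: ad ad_channel click_filter
```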
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39;, but you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G are <em>appended</em> to the user&#39;s existing supplementary groups. Otherwise, the existing groups will be replaced with the ones supplied.</p>
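<p>To verify the change took effect, you can list a user&#39;s group memberships with the id command (shown here for root, which exists on any system):</p>

```shell
# list the groups a user belongs to by name; the first is the
# primary group, the rest are the supplementary groups
id -nG root
```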
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> (most notably its speed), and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present:</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command-line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 30-35 minute phpdoc run took just 43 seconds with DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo&#39;s master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want the changes on the master branch, and I didn&#39;t want to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application, but you&#39;re free to use whatever codebase you like. For the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an Ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector, along with a standard PHPUnit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our Ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and PHPUnit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
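<p>If you find yourself using the cli jar a lot, a tiny wrapper function saves retyping the server URL. This is just my own convenience sketch (the jenkins_cli name and JENKINS_URL variable are mine, not part of Jenkins), and it assumes jenkins-cli.jar sits in the current directory as downloaded above:</p>

```shell
# Hypothetical convenience wrapper around the cli jar; assumes jenkins-cli.jar
# was downloaded as shown earlier and a server is listening on port 8080
jenkins_cli() {
  local server="${JENKINS_URL:-http://localhost:8080}"
  java -jar jenkins-cli.jar -s "$server" "$@"
}

# e.g. queue a build of a job named 'Bookings' (job name is illustrative):
# jenkins_cli build Bookings
```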
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload the server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
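<p>Those three commands wrap up neatly in a small shell function. The jenkins_ctl name below is made up (there is no official tool of that name); the default host:port matches the examples above:</p>

```shell
# Sketch of a helper for the three HTTP admin commands; 'jenkins_ctl' is a
# made-up name, and the default host:port matches the examples above
jenkins_ctl() {
  local cmd="$1" host="${2:-localhost:8080}"
  case "$cmd" in
    reload|restart|exit) curl -s "http://$host/$cmd" ;;
    *) echo "usage: jenkins_ctl {reload|restart|exit} [host:port]" >&2; return 1 ;;
  esac
}

# e.g. jenkins_ctl reload          # reload the server configuration
```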
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have lost the most time to.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, application deployments, tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the graphical package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, E/JGIt plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent git pull invocations will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>Alternatively, you can avoid having to do this at all by setting:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
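<p>Here is the whole sequence end-to-end as a self-contained sketch of my own, using a local bare repository as a stand-in for GitHub (all paths and the user identity are illustrative). Note that on modern Git the --set-upstream flag from option 4 is spelled --set-upstream-to:</p>

```shell
# End-to-end sketch of the steps above, with a local bare repository standing
# in for GitHub; all paths and the user identity are illustrative
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/foo.git"           # stand-in for the remote
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email "you@example.com"
git config user.name "You"
git config pull.ff only                     # silence newer Git's reconcile hint
git checkout -q -b master                   # make sure the branch is named 'master'
echo hello > README
git add README
git commit -q -m "initial commit"
git remote add origin "$tmp/foo.git"        # the post's first step
git push -q origin master                   # push the existing history
git branch --set-upstream-to=origin/master master   # modern spelling of option 4
git pull -q                                 # now resolves without arguments
git rev-parse --abbrev-ref master@{upstream}        # prints origin/master
```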
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
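<p>For reference, the approach can be sketched roughly like this. The function name and argument handling are my own choices (as is dumping all matching tables in a single mysqldump call); the gist embedded below is the canonical version:</p>

```shell
# Sketch of dumping only the tables matching a LIKE pattern; the function
# name and option handling are illustrative, not the gist's exact code
mysqldump_bypattern() {
  local user="$1" db="$2" pattern="$3"
  local tables
  # ask MySQL which tables match the pattern (-N suppresses the header row);
  # note: -p will prompt for the password once per client invocation
  tables=$(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '$pattern'" "$db") || return 1
  [ -n "$tables" ] || { echo "no tables match $pattern" >&2; return 1; }
  # dump just those tables (mysqldump accepts a list of table names)
  mysqldump -u"$user" -p "$db" $tables
}

# usage: mysqldump_bypattern myuser mydb 'mytables_%' > mytables.sql
```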
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. Until those updates arrive, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install the plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, this time using the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
<p>(The second key prints an empty line this time: &#39;another_key&#39; was stored under a String key, so looking it up by its Symbol form returns nil.)</p>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect as much of the learning Ruby literature says to basically ignore them for now, or launch into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed., puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string and integer representations; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages), two strings consisting of the same sequence of characters are still different objects. Two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime: you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. In a high-level language, the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar with Symbols I&#39;ll learn to accept them. But as a developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. The two best resources I&#39;ve found for that are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
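<p>One caveat: inside an alias body, <code>"$@"</code> refers to the interactive shell&#39;s own positional parameters (usually empty), and the alias above only works because the file name you type gets appended after the expanded text. A shell function passes its arguments through explicitly; a sketch assuming OSX&#39;s <code>qlmanage</code> is available:</p>

```shell
# Quick Look one or more files from the terminal, discarding qlmanage's chatter
ql() { qlmanage -p "$@" >/dev/null 2>&1; }
```

<p>Drop the function into your <code>~/.bash_profile</code> and use it as <code>ql file.png</code>.</p>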
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or, at a minimum, creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily grow into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial:</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect, and for small databases it is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character for a single-byte character set or up to four bytes per character for UTF-8. Bzip2 will bring the file size down considerably, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single pipeline.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems an efficient way to do your backup, this should be avoided: compression slows the dump down, and (by default with MyISAM) you’re locking tables and denying other clients access to them for longer. InnoDB implements row-level locking, which is slightly less offensive, but lengthy dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting it. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the integrity of a table can be lost as writes occur during the backup; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves import performance, as MySQL will only build the indexes at the end of the import. With keys enabled, the indexes are updated after each row is inserted; for a batch import this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
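<p>Putting the four options together, a full dump command might look like the following sketch (user, database and file names are placeholders; it needs a running MySQL server, so it is shown here only for shape):</p>

```shell
# Dump with minimal locking; --disable-keys and --no-autocommit change the
# statements written into the dump so the later import runs faster
mysqldump --single-transaction --skip-lock-tables \
          --disable-keys --no-autocommit \
          -uuser -p mydatabase > mydump.sql
# Compress afterwards, rather than inline, to keep the dump itself fast
gzip mydump.sql
```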
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression matches greedily, up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn delete</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all modified tracked files in the current path; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning in your native language. Often these will be cognates with the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes a word can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name, while in Ruby and PERL it is the first argument passed into the program.</p>&#13;
<p>I'm trying to think which makes more sense; probably the Ruby/PERL implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may look similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a>, you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation, as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index just to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around in the source to see where termDocs() was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter = null) method takes an optional 'filter' parameter. The online documentation makes scant mention of what values this $filter parameter can take. Luckily, in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
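<p>These constants are bit flags, so they can be combined with a bitwise OR to match more than one visibility at once. A quick self-contained sketch of my own (the Demo class is invented purely for illustration):</p>

```php
// Illustrative sketch: the long $filter values are bit flags, so a
// bitwise OR matches methods with any of the given visibilities.
class Demo
{
    public function visible() {}
    protected function guarded() {}
    private function hidden() {}
}

$r = new ReflectionClass('Demo');

// Public OR protected methods; the private one is filtered out.
$methods = $r->getMethods(ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_PROTECTED);

$names = array();
foreach ($methods as $m) {
    $names[] = $m->getName();
}
sort($names);
print_r($names); // guarded and visible, but not hidden
```

<p>The manual describes the filter as a bitwise disjunction of these flags, so any combination works the same way.</p>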
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As on many platforms, it seems that in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I'm at home, flicking through the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly, you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>, which can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you use a non-English language pack, convert its XML export to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way via AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And of course a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience of and understanding of how these low level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple tcpclient in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit though of learning a bit of C or C++, is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in. But similarly, a sometimes useful characteristic that makes the environment still relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str itself points to an address in memory where 20 bytes have been reserved for it. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
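<p>To make that concrete, here is a rough, self-contained sketch of my own. It is not a benchmark, just a demonstration that the two idioms produce the same string while the '.=' version grows its buffer step by step:</p>

```php
// Illustrative sketch: each '.=' may force PHP to grow (reallocate) the
// string's buffer behind the scenes, much as you would realloc() in C.
// Collecting the pieces and imploding once does the join in a single pass.
$s = '';
$parts = array();
for ($i = 0; $i < 1000; $i++) {
    $s .= 'x';        // string grows one byte at a time
    $parts[] = 'x';   // cheap array append; join later
}
$joined = implode('', $parts);
var_dump(strlen($s), $s === $joined); // both end up as the same 1000-byte string
```

<p>In practice PHP over-allocates to soften this cost, but the underlying reallocation is still work you never see in the syntax.</p>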
<p>As mentioned above, PHP was originally intended purely as a templating language for C web applications, which is where PHP modules / extensions come in. These were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
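<p>For example, a short self-contained sketch of my own that writes CSV rows into memory and reads them straight back, with no temporary file involved:</p>

```php
// Illustrative sketch: write CSV rows into a memory-backed stream,
// rewind, and read the whole buffer back without touching disk.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'aaron'));
rewind($fh);                        // seek back to the start of the buffer
$csv = stream_get_contents($fh);    // read everything written so far
fclose($fh);
echo $csv;
```
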
<p>You can fread, fwrite or stream_get_contents on the memory stream, or push it out over the network using the TCP streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive a notification email from Worldpay. This mail includes two attachments: the request Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library in your application without downloading the tarball and using something like Phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
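If you prefer not to open an editor, the same definition can be set non-interactively with svn propset. A sketch, assuming you run it from the root of a working copy and that `lib` (a hypothetical directory, substitute your own library path) is where you want the external to land:

```shell
# 'lib' is a hypothetical target directory -- use your own library path
svn propset svn:externals \
    'Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/' lib

svn propget svn:externals lib   # verify the definition took
svn update                      # pulls the Zend library down into lib/Zend
```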
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead…'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine, on case-sensitive e.g., HFS (Mac) or Unix file-system, this will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchyas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around 1/5 of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
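Putting it together over a whole catalogue, a hedged sketch of the batch loop (this assumes a bootstrapped Magento environment and the 'num_sales' attribute from the post; lookupSales() is a hypothetical helper standing in for wherever your figures come from):

```php
<?php
// Sketch only: assumes Mage is already bootstrapped, e.g.
//   require 'app/Mage.php'; Mage::app();
// and that the 'num_sales' attribute exists, as described above.
$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    // lookupSales() is hypothetical -- substitute your own data source
    $product->setNumSales(lookupSales($product->getSku()));
    // Writes just this one attribute: no full save(), no indexing overhead
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```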
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type DATE or DATETIME, although using FROM_UNIXTIME will let you work with unix timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
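For the timestamp case, a sketch of the same 30-day cutoff applied to an integer unix-timestamp column (the table and column names here are hypothetical):

```sql
-- Same cutoff as above, but 'created_at' is an INT unix timestamp
-- ('yourtable' and 'created_at' are placeholder names)
SELECT *
FROM yourtable
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(created_at);
```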
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider  three aspects</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or in other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the template's output will be included. Note the echo: getChildHtml() returns the rendered block as a string rather than printing it. Remember to refresh or clear Magento's cache first, though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout XML for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages under the URL customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages under the URL customer/account/edit/). Including your remove declaration in the right element here gives you fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where the second parameter to the delete command was removed. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your UTF-8 terminal you need to configure your environment to use a UTF-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
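<p>As a quick sanity check (a sketch; it assumes en_GB.UTF-8 is one of the locales installed on your system), list the UTF-8 capable locales and export one in the current shell:</p>

```shell
# List a few of the UTF-8 capable locales this system knows about
locale -a | grep -i 'utf' | head -n 3

# Opt in for this shell session (add the export line to ~/.profile
# or ~/.bash_profile to make it permanent)
export LC_ALL='en_GB.UTF-8'
echo "$LC_ALL"
```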
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-UTF-8 text as UTF-8. UTF-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ: a cp-1252 trademark symbol has a different code in UTF-8. Try to render a cp-1252 copyright symbol as UTF-8 and you will just see a question mark in the browser, as that byte does not map to a valid character in UTF-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<pre>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;utf-8&#39;, $isoText);</pre>
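<p>The same conversion is available from the shell through the iconv(1) command line tool, which is handy for normalising whole files (a sketch; /tmp/latin1.txt is just a throwaway example file):</p>

```shell
# Write 'café' with an ISO-8859-1 encoded é (byte 0xE9, octal 351)
printf 'caf\351\n' > /tmp/latin1.txt

# Convert the file to UTF-8
iconv -f ISO-8859-1 -t UTF-8 /tmp/latin1.txt > /tmp/utf8.txt
cat /tmp/utf8.txt
```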
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions assume iso-8859-1 input by default. To correctly prepare your UTF-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<pre>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</pre>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick, with the BSD sed that ships with Mac OS X, is to pass an explicit (empty) backup suffix: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
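<p>GNU sed (as found on most Linux systems) behaves differently: there the suffix must be attached directly to -i, so a bare sed -i works. A quick sketch of the GNU form, using a scratch file:</p>

```shell
# Create a scratch file to edit in place
printf 'hello world\n' > /tmp/helloworld.txt

# GNU sed: no separate suffix argument after -i
sed -i 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt
```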
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base URL directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expression produces the sequence of array indices: ${#FILES[@]} expands to the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get a line with 0 1 2 3 4 on it.</p>&#13;
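<p>For what it's worth, when you don't need the index itself, bash can iterate the array values directly, which also copes with elements containing spaces:</p>

```shell
# Same loop without seq: expand every element of the array, quoted
FILES=( /etc/hosts /etc/passwd )

for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```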
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: base URLs, test payment or shipping account credentials, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
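<p>To confirm the change took effect, you can list the current user's groups; id works the same way on OSX and Linux (the exact group names shown will of course depend on your system):</p>

```shell
# Print the names of all groups the current user belongs to
id -Gn
```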
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
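<p>To see the whole thing end to end, here is a throwaway sketch (repository names invented for illustration) that reproduces the problem and then fixes it; the --set-upstream-to line is exactly what the sup alias expands to:</p>

```shell
# Disposable demo repos; names are made up for illustration
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git            # stand-in for the real remote
git init -q work; cd work
git config user.email demo@example.com
git config user.name demo
git remote add origin ../remote.git
git commit -q --allow-empty -m 'first commit'
branch=$(git symbolic-ref --short HEAD)
git push -q origin "$branch"             # forgot -u: no tracking info is set
git branch --set-upstream-to="origin/$branch"   # what 'git sup' runs for you
git rev-parse --abbrev-ref "@{u}"        # reports the upstream, origin/<branch>
```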
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables cannot be passed along the pipeline, as each new subprocess starts with a brand new environment.</p>
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
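<p>A minimal illustration of the gotcha: a counter bumped inside a piped while loop is lost when the pipeline ends, while feeding the same loop from a redirection keeps it in the current shell:</p>

```shell
count=0
printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
echo "after pipeline: $count"   # prints 0: the loop ran in a subshell

count=0
while read -r line; do count=$((count+1)); done <<'EOF'
a
b
EOF
echo "after heredoc: $count"    # prints 2: no subshell this time
```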
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history without really helping to convey what exactly changed.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with <em>git pull --rebase</em>. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward — this is called a “fast-forward”.
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff &lt;branch&gt;
</code></pre>
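<p>If you want to see the setting bite, here is a disposable sketch (branch names invented): once both branches have new commits, a merge under merge.ff only is refused until you rebase or explicitly pass --no-ff:</p>

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo; cd repo
git config user.email demo@example.com
git config user.name demo
git config merge.ff only                 # local equivalent of the --global setting
git commit -q --allow-empty -m base
git checkout -q -b feature
git commit -q --allow-empty -m feature-work
git checkout -q -                        # back to the original branch
git commit -q --allow-empty -m diverge   # both branches now have new commits
git merge feature 2>/dev/null || echo "refused: rebase first or use --no-ff"
```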
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and a brief comparison between the two principal xUnit TDD styles: statist TDD and mockist/London-school TDD. The former is a style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group; however, some products actually have a 0.00 price (for whatever reason). You don&#39;t want to show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
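<p>The same trick works in any database that supports NULLIF. Here is a quick check with sqlite3 using invented sample data; since MIN() ignores NULLs, the zero-priced rows drop out:</p>

```shell
# Invented products: group 1 contains a 0.00 price that must not win.
# Expected rows: 1|4.5 and 2|3.25
sqlite3 <<'SQL'
CREATE TABLE products (group_id INTEGER, price REAL);
INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 3.25);
-- NULLIF turns 0 into NULL; MIN() then skips those rows entirely
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products
GROUP BY group_id;
SQL
```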
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. php_select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled, to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xerox parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the lab from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who the Executive-X he mentioned in <em>The Early History of Smalltalk</em> was (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about; scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order, with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists, and it is his ability to bring these characters to life that makes the book so riveting to read.</p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (née ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his lab, the other labs or management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning, and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs.</p>
<p>The book contains a number of particularly powerful scenes. Two stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish, but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reaction to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close, and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resigned and either followed him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or joined one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC in particular, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39; rather than 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepools&#39; higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
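<p>In practice that means mirroring the core path under app/code/local before editing; a sketch (run from the Magento root, paths as in the post):</p>
<pre><code># mirror the core class path under the local codepool, then copy the
# class there; the autoloader will prefer the local copy from now on
CLASS=Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
mkdir -p &quot;app/code/local/$(dirname &quot;$CLASS&quot;)&quot;
if [ -f &quot;app/code/core/$CLASS&quot; ]; then
    cp &quot;app/code/core/$CLASS&quot; &quot;app/code/local/$CLASS&quot;
fi
</code></pre>
<p>The copy in app/code/local also survives Magento upgrades, which overwrite files under app/code/core.</p>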
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and, by way of justification, quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a>:</p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80, and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. And although most of the vocabulary of OO software development was formalised during Smalltalk&#39;s creation, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few extra layers of abstraction, bigger piles of data and slightly more exotic technologies. But when I consider that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was doing the rounds in the 60s, I wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long Rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input; the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will then switch you directly to that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
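<p>If your zsh doesn&#39;t behave this way out of the box, the conveniences above are driven by a couple of options and aliases; a minimal ~/.zshrc sketch (frameworks like oh-my-zsh set up something very similar):</p>
<pre><code>setopt auto_cd     # a bare directory name (or ..) is treated as cd
setopt auto_pushd  # every cd pushes the old directory onto the stack

alias d=&#39;dirs -v&#39;  # list the stack, numbered
# let a bare 1..9 jump straight to that stack entry
for i in 1 2 3 4 5 6 7 8 9; do alias &quot;$i&quot;=&quot;cd -$i&quot;; done
</code></pre>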
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user&#39;s) permissions, you can write the file by doing:</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available on GitHub, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short: assuming you have MacPorts in /opt/local (the default) and are using the mysql55 package, run:</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, if we want to debug during a phpunit test, you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings early enough for xdebug to hook in.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to port 9000 on its localhost back to port 9000 on the machine you connected from. So when xdebug connects to localhost:9000 on the VM, it is actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full-stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3, with features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="http://github.com">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second-generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, wind the clock back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage: PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable release of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to navigate the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
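<p>To make the dependency declarations concrete, here is a minimal composer.json sketch (the acme package names are invented for illustration; monolog/monolog is the classic Packagist example):</p>

```json
{
    "name": "acme/example-app",
    "require": {
        "monolog/monolog": "1.2.*",
        "acme/hypothetical-library": "dev-master"
    }
}
```

<p>Running composer install resolves those constraints against Packagist (or Github) and generates an autoloader, so the old class loading boilerplate disappears.</p>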
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file argument to work with, so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to supply one without having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
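<p>Process substitution works with any program that expects filename arguments, not just xmllint; a tiny bash demonstration using only standard utilities:</p>

```shell
#!/usr/bin/env bash
# Each <(...) is exposed to the command as a readable file (e.g. /dev/fd/63),
# fed by the output of the inner command.
cat <(echo "this looks like a regular file to cat")

# Handy for comparing the output of two commands without temp files:
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "outputs identical"
```

<p>Note this is a bash (and zsh) feature; a plain POSIX sh will reject the &lt;( ) syntax.</p>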
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
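<p>If you want to try --name-only without touching a real project, a throwaway repo is enough (the file and commit names here are invented):</p>

```shell
#!/usr/bin/env bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo content > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "add app.txt"
# Print the commit subject plus only the affected paths, no diff body:
git show --name-only --format=%s HEAD
rm -rf "$repo"
```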
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was generally respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, one lesson is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see which branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously deleted them with $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branches list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
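<p>On any remotely modern Git you can also fold the prune into the fetch itself, which is one less command to remember (check git fetch --help on older versions to confirm the flag exists):</p>

```shell
# Equivalent to `git fetch origin` followed by `git remote prune origin`:
git fetch --prune origin
```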
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
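<p>For anyone who can&#39;t see the embedded gist, the override takes roughly this shape (a sketch only; the event and observer node names here are illustrative, so check the gist for the full set of log observers to disable):</p>

```xml
<config>
    <frontend>
        <events>
            <!-- illustrative event/observer names; mirror the ones in the gist -->
            <controller_action_predispatch>
                <observers>
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```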
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think: I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy, replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (and programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags, which let you provide environment-specific configuration for your provisioning. They are not supported by the stock Chef gem (currently version 10.12.0); to use Data Bags with Chef Solo you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
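<p>The gist aside, the shape of the trick in a module&#39;s adminhtml.xml looks roughly like this (the menu item and module names are placeholders for illustration):</p>

```xml
<config>
    <menu>
        <!-- placeholder node name: use the real menu item's identifier -->
        <some_menu_item>
            <depends>
                <module>Nonexistent_Module</module>
            </depends>
        </some_menu_item>
    </menu>
</config>
```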
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (the semicolon works much like it does in regular bash; the backslash stops the shell from swallowing it before find sees it).</p>
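<p>The whole technique, end to end, as a self-contained sketch (all file names invented):</p>

```shell
#!/usr/bin/env bash
set -e
dir=$(mktemp -d)
cd "$dir"
# Boundary files marking the window, plus one file inside it and one outside:
touch -t 202001010000 start_date_file
touch -t 202006150000 inside.log
touch -t 201906010000 too_old.log
touch -t 202012310000 end_date_file
# Only inside.log is newer than the start and not newer than the end:
find . -type f -newer start_date_file ! -newer end_date_file -name '*.log'
# prints ./inside.log
rm -rf "$dir"
```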
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning. First: if you have an old Magsafe (1) powerpack, as I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find and friends, it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shells no problem.</p>
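<p>As a concrete sketch (assuming the MacPorts bash lives at /opt/local/bin/bash), the whole dance looks like this:</p>

```shell
# Tell chsh the MacPorts bash is a legitimate shell, then switch to it.
# Both commands modify system state, so run them deliberately.
sudo sh -c 'echo /opt/local/bin/bash >> /etc/shells'
chsh -s /opt/local/bin/bash
```

<p>Open a new terminal and echo $BASH_VERSION to confirm you&#39;re on the 4.x build.</p>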
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; - \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It returns each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because of shell escaping rules around single quotes, the quote character is passed in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
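<p>To see the pipeline&#39;s plumbing without a database handy, you can stand in for the mysql output with printf (the values a, b and c are made up):</p>

```shell
# printf plays the part of `mysql --silent`: one column value per line
printf 'a\nb\nc\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# prints ['a','b','c'];
```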
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name:[Start TO Finish]
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic - it doesn&#39;t look at multi-product combinations - but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. It does mean the validator has to be constructed anew each time, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once PHPStorm is launched and fully started, you can opt to keep it locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called (and I don&#39;t think they do it this way anymore) a &#39;stage 1&#39; install. A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. By break I mean, in the absolute best case, the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usb key), get the network card modules loaded and get a DHCP address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
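<p>That copy is simply:</p>

```shell
# Give the chroot working DNS by reusing the host's resolver configuration
cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
```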
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. First I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when I pasted it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where entity_id comes first). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array result set where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two entries, one for each unique status value. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so it gets used as the key.</p>
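<p>You can see the collapsing behaviour outside PHP too. Here the first column plays the part of fetchPairs&#39; key column; three rows with only two distinct values in column one collapse to two entries (the sample rows are made up):</p>

```shell
# Key/value pairing on column one: rows sharing a key overwrite each
# other, so only one entry per distinct first-column value survives.
printf 'Enabled 101\nEnabled 102\nDisabled 103\n' \
  | awk '{ pairs[$1] = $2 } END { n = 0; for (k in pairs) n++; print n }'
# prints 2
```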
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks in a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie it back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different: they should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented software design and practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some room for confusion when you open a login shell indirectly, for example with the su - command or via an explicit login-shell option that some desktop environments provide. The same rule applies: a login shell sources .bash_profile, and .bashrc is not read unless .bash_profile sources it explicitly.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
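<p>As a sketch of that arrangement (the guard is just a common convention, nothing bash itself requires):</p>

```shell
# ~/.bash_profile -- read by login shells only.
# One-time environment setup lives here...
export PATH="$HOME/bin:$PATH"

# ...then delegate to ~/.bashrc so that login and non-login interactive
# shells end up with the same aliases and functions.
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
```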
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can purge each one manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
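<p>An equivalent, slightly more transparent approach (a sketch) is to list the packages left in dpkg&#39;s &#39;rc&#39; state (removed, config files remaining) and purge them directly:</p>

```shell
# Packages with leftover config files show up with status "rc" in dpkg -l.
# Preview the list first; the actual purge needs root, so it is left
# commented out here.
dpkg -l 2>/dev/null | awk '/^rc/ { print $2 }'
# sudo apt-get purge $(dpkg -l | awk '/^rc/ { print $2 }')
```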
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Note that <em>modifyvm</em> can only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
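<p>Tying these together, a small (hypothetical) loop can forcibly power off every running VM by parsing the names out of <em>VBoxManage list runningvms</em>:</p>

```shell
# `VBoxManage list runningvms` prints lines like: "VM Name" {uuid}
# Strip the quotes and uuid with sed, then power each VM off in turn.
VBoxManage list runningvms 2>/dev/null | sed 's/^"\(.*\)".*/\1/' |
while IFS= read -r vm; do
    VBoxManage controlvm "$vm" poweroff
done
```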
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I&#39;m missing compatible 32-bit builds of both libXss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into, and then ripped out of, the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
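<p>The same resize can be driven by GNU find instead of a bash loop (a sketch; the -resized suffix is an arbitrary choice, and <em>mogrify</em> is the tool to reach for if you want to overwrite in place):</p>

```shell
# Resize every .jpg in the current directory, writing *-resized.jpg
# alongside the originals. find only invokes the inner shell when at
# least one file matches.
find . -maxdepth 1 -name '*.jpg' -exec sh -c '
  for img; do
    convert -resize 1280x720 "$img" "${img%.jpg}-resized.jpg"
  done' sh {} +
```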
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying: push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself this is not remarkable, after all you can use the &#39;&gt;&#39; operator to do it. But what is different, is how Sponge waits until the end-of-file character (EOF) before opening and writing to an output file. I.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 (cp1252) to UTF-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
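<p>For example, a quick sketch (the sample file and its contents are invented for the demo; on GNU file the -i flag adds a charset guess to the MIME type, and it is only a guess based on the file&#39;s bytes):</p>

```shell
# Write a sample file containing cp1252 curly quotes (bytes 0x93/0x94),
# which are valid in neither ASCII nor UTF-8
printf 'He said \223hello\224 to me\n' > sample.txt

# GNU file's -i flag reports a MIME type plus a charset guess;
# treat the result as a hint rather than a guarantee
file -i sample.txt
```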
<p>So if we have a directory of, say, C source files we want to convert, we can use bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are anything like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
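<p>For comparison, the same effect can usually be had with an ordinary pipe reading left to right: the downloader writes the archive to stdout and tar reads it from stdin, as in wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxvf -. Below is an offline sketch of that pipeline, with a locally built tarball and cat standing in for wget:</p>

```shell
# Build a small tarball locally so the sketch runs offline; in real
# use, replace 'cat atarfile.tar.gz' with 'wget -q -O - <url>'
mkdir -p atarfile
echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile

# The left-hand command streams the archive to stdout; '-f -' tells
# tar explicitly to read the archive from stdin
cat atarfile.tar.gz | tar zxvf -
```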
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (those that still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and comes with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
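<p>With recent versions of git, the last two steps can be combined: git push -u (short for --set-upstream) pushes the branch and records the tracking relationship in one go. A self-contained sketch, using a local bare repository in place of a real remote:</p>

```shell
# A bare repository stands in for the hosted remote so the sketch
# runs offline; remote.git and work are demo placeholders
git init --bare remote.git
git clone remote.git work
git -C work config user.email you@example.com
git -C work config user.name 'You'
git -C work commit --allow-empty -m 'initial'
git -C work push origin HEAD

# -u (--set-upstream) pushes the new branch and records the
# tracking relationship in a single command
git -C work checkout -b my-new-feature
git -C work commit --allow-empty -m 'Initial feature commit'
git -C work push -u origin my-new-feature
```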
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you also want to add him to the &#39;wheel&#39; group), use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
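<p>To confirm the change took effect, id lists a user&#39;s groups. (Note that usermod only affects new login sessions, so the user may need to log out and back in before the new group shows up.)</p>

```shell
# -nG prints group names (rather than numeric ids); with no user
# argument it reports on the current user, so no root is needed here
id -nG
```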
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo&#39;s master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
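<p>If you don&#39;t need to keep the stash entry around afterwards, git stash pop combines apply and drop in one step. A self-contained sketch (the repository and file names are invented for the demo):</p>

```shell
# A throwaway repository with a develop branch and one uncommitted change
git init demo
git -C demo config user.email you@example.com
git -C demo config user.name 'You'
echo 'base' > demo/file.txt
git -C demo add file.txt
git -C demo commit -m 'initial'
git -C demo branch develop
echo 'change' >> demo/file.txt

# Stash on the wrong branch, switch, then pop: pop applies the change
# and drops the stash entry in a single command
git -C demo stash
git -C demo checkout develop
git -C demo stash pop
```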
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
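<p>Recent versions of git also offer a shorthand: --track infers the local branch name from the remote-tracking branch, so the -b argument can be dropped. A runnable sketch, with a local bare repository standing in for origin (the names remote.git, seed and work are demo placeholders):</p>

```shell
# Build a local stand-in for a hosted remote carrying a develop branch
git init --bare remote.git
git clone remote.git seed
git -C seed config user.email you@example.com
git -C seed config user.name 'You'
git -C seed commit --allow-empty -m 'initial'
git -C seed push origin HEAD
git -C seed push origin HEAD:develop

# --track names the new local branch after the remote branch and
# sets its upstream in one step
git clone remote.git work
git -C work checkout --track origin/develop
```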
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf command-line tool), do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
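<p>Equivalently, you can let git write that file for you, as the error message itself suggests. Run these as the Jenkins system user; the HOME override below exists only so the sketch doesn&#39;t touch your own ~/.gitconfig when you try it as yourself.</p>

```shell
# Point HOME at a scratch directory purely for demonstration; as the
# jenkins user you would omit this line and write the real ~/.gitconfig
export HOME="$(mktemp -d)"

# git config --global writes the [user] section into ~/.gitconfig
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
```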
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases; we have our Ant build file set up and defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and PHPUnit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that is, at first glance, daunting. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard, we need to change &#39;job-name&#39; to &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a first build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
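<p>If you prefer to script that edit, a sed one-liner works. This sketch operates on a local copy of the file so it can be run anywhere; on a real install you would target /etc/default/jenkins (with sudo).</p>

```shell
# Create a stand-in for /etc/default/jenkins, then rewrite the HTTP_PORT line.
printf '# port for HTTP connector (default 8080; disable with -1)\nHTTP_PORT=8080\n' > jenkins.defaults
# GNU sed's -i flag edits the file in place.
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' jenkins.defaults
grep '^HTTP_PORT' jenkins.defaults   # → HTTP_PORT=8081
```

<p>Follow it with the service restart shown above.</p>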
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I&#39;ve burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>
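<p>The fix when you hit this is simply to flush the cache. A minimal sketch, assuming file-based caching and a standard store layout (the mkdir/touch lines just stand in for a real store so the snippet runs anywhere):</p>

```shell
# From the Magento store root: flush the file-based cache so the cached
# column list is rebuilt on the next request. var/cache is the standard
# location; the first two lines only simulate an existing store.
mkdir -p var/cache
touch var/cache/stale_metadata
rm -rf var/cache/*
```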

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
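<p>For flavour, the sort of INFORMATION_SCHEMA query involved looks roughly like this (the gist below is the full report; credentials and schema name are placeholders, and it is wrapped in a function so nothing runs against a live server here):</p>

```shell
# List the ten largest tables in a schema, sizing each as data + index bytes.
# information_schema.tables exposes data_length and index_length per table.
largest_tables() {
  mysql -u"$1" -p -e "
    SELECT table_name,
           ROUND((data_length + index_length) / 1048576, 2) AS size_mb
    FROM information_schema.tables
    WHERE table_schema = '$2'
    ORDER BY (data_length + index_length) DESC
    LIMIT 10;"
}
# usage: largest_tables user mydb
```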
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful but it helped google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved either using the package manager or aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent git pull invocations will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best with the least amount of work.</p>
<p>You can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>This tells git to configure tracking automatically whenever you create a branch from a remote branch, avoiding the manual step above.</p>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined initialise the test database by changing into your store root and running the UnitTests.php test suite</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
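<p>The shape of the approach, as a sketch (the gist below is the polished version; user, database and pattern are placeholders):</p>

```shell
# Dump only the tables whose names match a SQL LIKE pattern.
# 1. Ask information_schema for matching table names (-N: no headers, -B: batch).
# 2. Pass the resulting list straight to mysqldump.
mysqldump_bypattern() {
  user="$1"; db="$2"; pattern="$3"
  tables=$(mysql -u"$user" -N -B -e \
    "SELECT table_name FROM information_schema.tables
     WHERE table_schema = '$db' AND table_name LIKE '$pattern';")
  # $tables is intentionally unquoted so each name becomes its own argument
  mysqldump -u"$user" "$db" $tables
}
# usage: mysqldump_bypattern user mydb 'mytables_%' > mytables.sql
```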
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using the Strings from the keys array instead, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text: 1 byte per character if the output is ANSI, or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, it should be used with care: mysqldump locks the tables it is reading (by default MyISAM tables are locked for the duration of the dump), and compressing inline slows the dump down, extending the time other clients are denied access. InnoDB implements row level locking, which is slightly less offensive, but the longer the dump takes the greater the disruption.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
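<p>To get a feel for the size/speed trade-off on your own data, a quick local comparison is easy. This is only a sketch using a made-up dump file; substitute a real .sql export:</p>&#13;

```shell
cd "$(mktemp -d)"

# build a throwaway "dump" of repetitive SQL-ish text
for i in $(seq 1 20000); do
  echo "INSERT INTO t VALUES ($i, 'some sample row data $i');"
done > mydump.sql

gzip  -c mydump.sql > mydump.sql.gz
bzip2 -c mydump.sql > mydump.sql.bz2
ls -l mydump.sql mydump.sql.gz mydump.sql.bz2   # compare the three sizes

# decompressing must reproduce the original byte for byte
gunzip -c mydump.sql.gz | cmp - mydump.sql
```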
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the data without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to it during the backup process. That risk has to be weighed against the risk of blocking access to the table during a lengthy backup process.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
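<p>Putting the options above together, a full backup command might look something like the following. Treat it as a sketch: the database name and credentials are placeholders, and you should check each flag against your mysqldump version.</p>&#13;

```shell
mysqldump --single-transaction --skip-lock-tables \
          --disable-keys --no-autocommit \
          -uuser -p mydatabase > mydump.sql
```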
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty git repository in the current directory, or if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all modified and deleted files that git already tracks; alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar subversion model of svn commit.</p>&#13;
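<p>To see the whole round trip in one self-contained sketch (local temporary paths stand in for the ssh remote here, and the identity flags are only there so the commit succeeds on a fresh machine):</p>&#13;

```shell
CENTRAL="$(mktemp -d)/central.git"   # stands in for the remote ssh repository
WORK="$(mktemp -d)/work"

git init -q --bare "$CENTRAL"
git init -q "$WORK"
cd "$WORK"
echo 'hello' > readme.txt
git add readme.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm 'initial commit'
git remote add origin "$CENTRAL"
git push -q origin HEAD              # HEAD sidesteps assuming the branch is called master
```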
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
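<p>A quick self-contained demonstration (throwaway repository and a made-up config file):</p>&#13;

```shell
DEMO="$(mktemp -d)"
cd "$DEMO"
git init -q .
echo 'password=dev' > config.ini
git add config.ini
git -c user.email=demo@example.com -c user.name=demo commit -qm 'add default config'
git update-index --assume-unchanged config.ini
echo 'password=local' > config.ini   # a local-only tweak
git status --porcelain               # prints nothing: the change is ignored
```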
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by recreating PEAR's missing cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore. A safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite tedious, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly the same as you expect, some won't work at all, and in some cases they'll sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
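<p>A throwaway script makes the bash behaviour concrete:</p>&#13;

```shell
cat > /tmp/argv_demo.sh <<'EOF'
echo "$0"   # the script's own name
echo "$1"   # the first real argument
EOF
bash /tmp/argv_demo.sh helloworld
# prints /tmp/argv_demo.sh, then helloworld
```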
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name. In Ruby, and in PERL, it is the first argument passed into the program. </p>&#13;
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'start' to shutdown oin windows. To uninstall a plugin / feature you need to go to the install new software screen. On a Mac it's found by highlighting the help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>Seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature; you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a url or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('http://a.com/uri', 'uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored, the distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Seach_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionProperty::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book, or, in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First, back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you also use a downloaded language pack, convert its XML file to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and with a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts: it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves someone some time: when trying to get an oauth token from Salesforce, use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it as a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely check out the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver: Zend_Db_Select::assemble() - outputs the object's current SQL state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design-challenged developer like myself, http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[For some people, using #VIM comes naturally, i.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as SQL. Very useful when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment with which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
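<p>By way of illustration (this sketch is mine, not from the original post), here is a minimal TCP exchange using PHP's stream functions, which map almost one-to-one onto the C socket calls noted in the comments. Running the server and client in one process is purely to keep the fragment self-contained:</p>

```php
<?php
// Each stream call below mirrors the BSD socket call you would make in C.
// Port 0 asks the OS for an ephemeral port.
$server = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr); // socket()+bind()+listen()
$addr   = stream_socket_get_name($server, false);                     // getsockname()

$client = stream_socket_client("tcp://$addr", $errno, $errstr, 5);    // socket()+connect()
fwrite($client, "ping\n");                                            // send()

$conn = stream_socket_accept($server, 5);                             // accept()
$line = fgets($conn);                                                 // recv(), line-buffered
echo $line;

fclose($client);
fclose($conn);
fclose($server);
```

<p>Strip the $ sigils and swap each stream call for its C counterpart, and the shape of the program barely changes.</p>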
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is rare among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes of memory for str: room for 19 characters plus the terminating NUL. Now if we try to write 21 characters to this string, C won’t autoexpand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
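<p>A small comparison (illustrative, not from the original post) makes this concrete; each <code>.=</code> append below may oblige the engine to grow or reallocate the string's internal buffer behind the scenes, work that C would make you do by hand:</p>

```php
<?php
// Build "1,2,3,4,5" by repeated concatenation: PHP silently handles
// whatever buffer growth/reallocation each .= triggers.
$csv = '';
foreach (range(1, 5) as $n) {
    $csv .= $n . ',';
}
$csv = rtrim($csv, ',');

// The same result assembled in a single pass at the end:
$alt = implode(',', range(1, 5));

var_dump($csv === $alt); // bool(true)
```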
<p>As mentioned above, PHP was originally intended to be a purely templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend starting out with the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. You can get around the need to physically create a file by using the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
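<p>For example (an illustrative sketch, with invented field names), fputcsv can write straight into memory and the result can be read back without anything touching disk:</p>

```php
<?php
// Write CSV rows into an in-memory stream, then rewind and read the
// whole thing back as a string. No temporary file is created.
// Delimiter, enclosure and escape are passed explicitly.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, ['id', 'name'], ',', '"', '\\');
fputcsv($fh, [1, 'widget'], ',', '"', '\\');

rewind($fh);                      // seek back to the start
$csv = stream_get_contents($fh);  // slurp the buffer
fclose($fh);

echo $csv; // "id,name\n1,widget\n"
```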
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL:</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded post data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
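<p>The same resubmission can also be scripted in PHP rather than shell. A rough sketch of mine (the payload below is a truncated stand-in for the real attachment data):</p>

```php
<?php
// Decode a saved notification body, optionally correct fields,
// and re-encode it as a POST body. Payload truncated for brevity.
$saved = 'transId=1000000000&transStatus=Y&authAmount=716.86&cartId=12345678';

parse_str($saved, $fields);          // into an associative array
// ...inspect or correct individual fields here...
$body = http_build_query($fields);   // back into a url-encoded body

// The resubmission itself could then use curl, or PHP streams:
// $ctx = stream_context_create(['http' => [
//     'method'  => 'POST',
//     'header'  => 'Content-Type: application/x-www-form-urlencoded',
//     'content' => $body,
// ]]);
// file_get_contents('https://mysite.com/callback', false, $ctx);

echo $body;
```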
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third party libraries out of your own repository while still pulling them into your working copy.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals .</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
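<p>To make the change concrete, here is a sketch of the before and after for the profiler block in your theme's layout/page.xml (the surrounding layout markup is omitted):</p>

```xml
<!-- before: the unnamed profiler block triggers the toHtml() fatal error -->
<block type="core/profiler" output="toHtml"/>

<!-- after: giving the block a name resolves it -->
<block type="core/profiler" output="toHtml" name="core_profiler"/>
```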
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
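<p>To illustrate the lookup, here is a rough Python sketch of the alias-to-path logic. This mimics, but is not, Magento's actual autoloader code, and the 'Mymodule' class prefix is a stand-in for whatever your module's config declares:</p>

```python
def model_alias_to_class(alias):
    """Build a class name from a model alias the way Magento's uc_words
    helper does: capitalise each underscore-separated segment."""
    module, name = alias.split('/')
    # real Magento uses the class prefix configured in the module's config.xml
    prefix = module.capitalize() + '_Model'
    return prefix + '_' + '_'.join(part.capitalize() for part in name.split('_'))


def class_to_path(class_name):
    """The autoloader swaps underscores for directory separators."""
    return class_name.replace('_', '/') + '.php'


# underscores in the alias become nested directories...
print(class_to_path(model_alias_to_class('mymodule/a_long_name_for_a_model')))
# ...while a squashed alias only gets its first letter capitalised
print(class_to_path(model_alias_to_class('mymodule/alongnameforamodel')))
```

<p>Neither resolved path points at a file named ALongNameForAModel.php, which is exactly why camelcased model names break on case-sensitive file systems.</p>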
<p>On Windows (and other case-insensitive file systems) this is fine; on a case-sensitive file system, e.g. case-sensitive HFS+ (Mac) or most Unix file systems, it will not work.</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old, and the query above will select it.</p>&#13;
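<p>If you want to play with the logic without a MySQL server to hand, here is a sketch using Python's built-in sqlite3 module, where SQLite's date('now', '-30 days') stands in for MySQL's DATE_SUB(CURDATE(), INTERVAL 30 DAY); the table and column names are made up for the example:</p>

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE orders (id INTEGER, created TEXT)")

old = (date.today() - timedelta(days=45)).isoformat()  # more than 30 days old
new = (date.today() - timedelta(days=5)).isoformat()   # less than 30 days old
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, old), (2, new)])

# ISO-formatted dates compare correctly as text, so the same
# "cutoff > date_column" shape selects only the older row
rows = conn.execute(
    "SELECT id FROM orders WHERE date('now', '-30 days') > created"
).fetchall()
print(rows)
```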
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by subbing the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. The type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In that case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore, within the &lt;customer_account&gt; element we add the code:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name=root element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8; by default, however, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>To get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8-enabled locale (you can get a list of available locales by calling 'locale -a'). If you're using en_GB or de_DE, just edit /etc/profile, ~/.profile, or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
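<p>For example, the relevant line in ~/.profile might look like this (en_GB.UTF-8 is illustrative; substitute any UTF-8 locale listed by locale -a):</p>

```shell
# Illustrative ~/.profile snippet: force a UTF-8 locale for the session.
# en_GB.UTF-8 is an example; pick any UTF-8 locale from `locale -a`.
export LC_ALL='en_GB.UTF-8'
echo "$LC_ALL"
```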
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1; lower ASCII covers the basic alphabet (a-z, A-Z, hyphens, commas, etc.). Extended characters such as accents, symbols and umlauts differ, however. A cp-1252 trademark symbol has a different code in utf-8, so if you render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output, you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't work; you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
&#13;
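<p>If you need the same command to work under both BSD (Mac OSX) and GNU sed, one workaround is to always supply a backup suffix in the attached form, which both implementations accept:</p>

```shell
# Portable across BSD and GNU sed: -i.bak supplies an explicit backup
# suffix, so neither implementation chokes on a missing argument.
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt
rm helloworld.txt helloworld.txt.bak
```

The trade-off is an extra .bak file to clean up afterwards.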
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base URL directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) part generates the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and seq produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4, one per line.</p>&#13;
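<p>As an aside, if you don't need the numeric index, bash can iterate the array elements directly, sidestepping seq entirely (paths below are placeholders):</p>

```shell
# Iterate the array elements directly; quoting "${FILES[@]}" keeps
# paths containing spaces intact, which the seq/index version would not.
FILES=( a/path/to/a/file1 a/path/to/another/file2 )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```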
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don't work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this:</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
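<p>The moving part here is git symbolic-ref --short HEAD, which prints the current branch name. You can see it in action in a throwaway repository (the branch name is illustrative):</p>

```shell
# Demonstrate the branch-name lookup the alias relies on.
cd "$(mktemp -d)"
git init -q -b zendesk .
git symbolic-ref --short HEAD   # prints the current branch: zendesk
```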
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
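<p>A quick demonstration of the problem, plus one of the workarounds from that FAQ (reading from process substitution so the loop runs in the current shell):</p>

```shell
# The while loop runs in a pipeline subshell, so its increment is lost.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count + 1)); done
echo "$count"   # 0

# Workaround: process substitution keeps the loop in the current shell.
count=0
while read -r line; do count=$((count + 1)); done < <(printf 'a\nb\nc\n')
echo "$count"   # 3
```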
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
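<p>A minimal sketch of the setting in action, in a throwaway repository (all names below are illustrative): the fast-forward merge succeeds and the history stays linear, with no merge commit.</p>

```shell
# With merge.ff=only, a merge that can fast-forward just moves the
# branch pointer; no merge commit is created.
cd "$(mktemp -d)"
git init -q -b main .
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m 'initial'
git config merge.ff only
git checkout -q -b feature
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m 'feature work'
git checkout -q main
git merge feature >/dev/null          # fast-forwards cleanly
git rev-list --count --merges HEAD    # 0: no merge commits in history
```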
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (unsurprisingly) easier to test than code that is constantly mutating global state.</p>
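<p>A tiny runnable sketch of the idea (my example, not the book&#39;s): PHP&#39;s own assert() has accepted a failure description as its second argument since 5.4.8, and PHPUnit&#39;s assert* methods take a similar message as their final argument.</p>

```php
<?php
// Sketch only (not from the book): the second argument to assert() is a
// description shown when the assertion fails, much like the optional
// message argument on PHPUnit's assert* methods.
$lowest = min(array_filter([0.00, 12.50, 9.99])); // drop the free item
assert($lowest === 9.99, 'lowest price should ignore free items');
echo "OK\n";
```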
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: statist TDD and mockist/London-school TDD. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest-priced product from that group. However, some products actually have a 0.00 price (for whatever reason). You don&#39;t want to show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null if tableref.column is equal to 0. Returning null excludes that value from MIN, which has the effect of taking the minimum only over values greater than zero.</p>
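<p>If it helps to see the mechanics outside SQL, here&#39;s a rough plain-PHP sketch (the table and prices are made up for illustration) of what MIN(NULLIF(price, 0)) computes per group:</p>

```php
<?php
// Rough sketch: NULLIF(price, 0) turns zero prices into NULL, and MIN()
// ignores NULLs -- so the zeros simply drop out of the aggregate.
$rows = [
    ['group_id' => 1, 'price' => 0.00],
    ['group_id' => 1, 'price' => 12.50],
    ['group_id' => 1, 'price' => 9.99],
    ['group_id' => 2, 'price' => 15.00],
];
$minPrice = [];
foreach ($rows as $row) {
    if ($row['price'] == 0) {
        continue; // NULLIF(price, 0) => NULL, which MIN() skips
    }
    $gid = $row['group_id'];
    $minPrice[$gid] = isset($minPrice[$gid])
        ? min($minPrice[$gid], $row['price'])
        : $row['price'];
}
print_r($minPrice); // group 1 => 9.99, group 2 => 15.00
```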
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one place OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. php_select, which we installed earlier, fixes this by &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn on the channel auto-discovery option, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed a run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file for the mysql version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -- This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);  // prints //www.google.com/a/b/c/d.img
</code></pre>
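<p>As a quick follow-on sketch: when all you want is the file name without the leading slash, you can skip strrchr&#39;s first character, or just reach for basename():</p>

```php
<?php
$url = 'http://www.google.com/a/b/c/d.img';
echo substr(strrchr($url, '/'), 1), "\n"; // d.img -- drop the leading slash
echo basename($url), "\n";                // d.img -- clearer still
```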
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way, so this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t live with broken windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
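<p>A minimal sketch of those steps from the shell (paths as above; the first three lines just build a scratch stand-in for a Magento root so the commands can be tried safely - on a real install, run only the last two from the install root):</p>

```shell
# Scratch stand-in for a Magento install root (demo only)
cd "$(mktemp -d)"
mkdir -p app/code/core/Mage/GoogleCheckout/Model/Api/Xml
echo '<?php // core Checkout class' > app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php

# The actual steps: mirror the path under the local codepool, then copy the
# core file there and apply the unit-price fix to the copy.
mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
   app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
```

<p>The copy under local now shadows the core class, per the classloader priority described in [2].</p>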
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo - one in local, the other in core - then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, with a particular focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk&#39;s development formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013: we (still) don&#39;t have flying cars or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits which improve my command line efficiency.</p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long Rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have Macports in /opt/local (the default) and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
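<p>If the project manages its gems through Bundler, the same build flags can be recorded once with bundle config so every subsequent bundle install picks them up. This is a sketch using the same Macports paths as above; adjust them to your prefix and MySQL port version:</p>

```shell
# Store per-gem build flags in Bundler's config; 'bundle install' will then
# pass them to the mysql2 extension build automatically.
bundle config build.mysql2 "--with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql"
```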
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, if we want to debug during a phpunit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely enough fashion for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to its own port 9000 back to port 9000 on the machine you connected from. So when xdebug on the VM goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
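<p>Putting the two halves together (hostnames as in the examples above; a sketch rather than a tested recipe):</p>

```shell
# On the dev machine running the IDE: open a reverse tunnel so connections to
# the VM's port 9000 come back to the IDE's port 9000
ssh -R 9000:localhost:9000 myvm.local

# Then, in the resulting shell on the VM: xdebug's default remote_host of
# localhost now reaches the IDE through the tunnel, so no -d overrides needed
PHP_IDE_CONFIG='serverName=mydevmachine.local' phpunit -c phpunit.xml
```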
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full-stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.com">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second-generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strenuous efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires the stable release of package y, while package z requires the beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high-quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting XML, though, is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
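<p>If you want to try these flags out safely, here is a throwaway-repository sketch (all names and paths here are arbitrary):</p>

```shell
# A scratch repo to see --name-only in action.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m 'initial'
touch README.md
git add README.md
git -c user.email=a@b.c -c user.name=a commit -q -m 'add readme'
git show --name-only HEAD   # commit header, then just the file names: README.md
```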
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
mysql&gt; FLUSH PRIVILEGES;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.</p>
<p>This can be partially forgiven because, owing to the age of the language, there&#39;s plenty of out-of-date information out there ranking highly in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or on IRC. There are countless well-attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation, admittedly, but basically it means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, one lesson is that Rails programmers live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously run $ git push origin :branch from another host).</p>
<p>To refresh it, I need to prune my branch list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
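<p>The whole workflow can be reproduced against throwaway repositories (all paths and branch names below are arbitrary); note that git can also fetch and prune in a single step with fetch -p:</p>

```shell
# Reproduce a stale remote-tracking branch locally, then prune it.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m 'initial'
git push -q origin HEAD
git checkout -q -b feature
git push -q origin feature
# ...meanwhile, on "another host", someone deletes the branch upstream:
git clone -q "$tmp/origin.git" "$tmp/elsewhere" 2>/dev/null
git -C "$tmp/elsewhere" push -q origin :feature
git branch -r      # origin/feature is still listed here (stale)
git fetch -qp      # fetch --prune: fetch and prune in one step
git branch -r      # origin/feature is gone
```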
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table-cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>This fix is easy, replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of how bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe; being human, they fixed one spot but missed the other (an identical line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
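<p>For reference, that local code pool override looks roughly like this (standard Magento 1.x paths, run from the Magento root; the one-line fix is then applied by hand to the copied file):</p>

```shell
# Copy the core file into the local code pool, which Magento checks before
# core; then hand-apply the getEscapedQueryText() fix to the copy.
SRC=app/code/core/Mage/CatalogSearch/Block/Result.php
DST=app/code/local/Mage/CatalogSearch/Block/Result.php
mkdir -p "$(dirname "$DST")"
[ -f "$SRC" ] && cp "$SRC" "$DST" || echo 'run this from the Magento root'
```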
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes each found filename for &#39;{}&#39;) and the escaped semicolon (\;) terminates the command sequence, much as ; does in regular bash.</p>
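<p>Putting the whole recipe together, a self-contained sketch with made-up names and dates. One wrinkle: a file is never strictly newer than itself, so the start boundary file excludes itself, but the end boundary file does satisfy the range test and needs excluding by name (otherwise it gets deleted too):</p>

```shell
# Delete only files modified between March and September 2020.
d=$(mktemp -d) && cd "$d"
touch -t 202001010000 too-old.log
touch -t 202006150000 in-range.log
touch -t 202012310000 too-new.log
touch -t 202003010000 start_date_file
touch -t 202009010000 end_date_file
find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name end_date_file                      # prints ./in-range.log
find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name end_date_file -exec rm {} \;       # in-range.log is gone
```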
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) power pack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change with no problem.</p>
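<p>The whole dance, sketched as a script. It runs against a scratch copy of /etc/shells so it can be tried safely; on the real system the append needs sudo, and /opt/local/bin/bash is the Macports install path:</p>

```shell
# Register a non-standard shell in (a scratch copy of) /etc/shells.
NEW_SHELL=/opt/local/bin/bash
SHELLS_FILE=$(mktemp)                    # stand-in for /etc/shells
cp /etc/shells "$SHELLS_FILE" 2>/dev/null || echo '/bin/sh' > "$SHELLS_FILE"
grep -qx "$NEW_SHELL" "$SHELLS_FILE" || printf '%s\n' "$NEW_SHELL" >> "$SHELLS_FILE"
grep -qx "$NEW_SHELL" "$SHELLS_FILE" && echo 'shell registered'
# for real: echo /opt/local/bin/bash | sudo tee -a /etc/shells
#           chsh -s /opt/local/bin/bash
```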
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; - \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Because the awk program itself is single-quoted, you can&#39;t easily embed a literal quote, so you pass one in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
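<p>The formatting stages can be tried on their own by feeding the pipeline from printf instead of mysql (the colour names here are just stand-in data):</p>

```shell
# Same awk | paste | sed stages, minus the database.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# -> ['red','green','blue'];
```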
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for and against. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a>, who looks in-depth at the topic. I generally think, though, that in Object Oriented software you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While that means it has to be constructed again on every iteration, it also allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t play nicely with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up and moved on to Ubuntu. It was remarkable in that it provided a BSD-like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. And by break I mean, in the absolute best case, the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf over to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
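<p>Put together, the whole recovery sequence looks something like this. The devices and mountpoint are just the examples from above; the run wrapper only prints each command, so you can sanity-check the sequence before swapping the echo for real execution as root.</p>
<pre><code>TARGET=/mnt/ubuntu
ROOT_DEV=/dev/sda5
BOOT_DEV=/dev/sda1

# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

run mount -t ext4 "$ROOT_DEV" "$TARGET"
run mount -t ext2 "$BOOT_DEV" "$TARGET/boot"   # only if /boot is separate
run mount -t proc none "$TARGET/proc"
run mount -o bind /dev "$TARGET/dev"
run mount -o bind /sys "$TARGET/sys"
run cp /etc/resolv.conf "$TARGET/etc/resolv.conf"
run chroot "$TARGET" /bin/bash
</code></pre>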
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. This is typically used on grouped products when determining if all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (the reverse of the if branch, where the entity id comes first). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array result set where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software, Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD) drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the Sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell, such as when you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell means .bash_profile is sourced, and .bashrc only gets sourced if your .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
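<p>That deferral is a one-liner in ~/.bash_profile, guarded so a missing .bashrc doesn&#39;t cause an error:</p>
<pre><code># ~/.bash_profile: defer everything to ~/.bashrc
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
</code></pre>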
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to programmatically load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
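<p>To see what the grep and sed stages are contributing, you can feed them a couple of hand-written selection lines (the package names here are invented for illustration):</p>

```shell
# Sample lines in the format emitted by `dpkg --get-selections`;
# packages marked "deinstall" were removed but kept their config files.
printf 'oldpkg\t\t\tdeinstall\nkeptpkg\t\t\tinstall\n' \
  | grep deinstall \
  | sed 's/deinstall/purge/'
```

<p>This prints only the removed package, with its selection state rewritten to purge. The real one-liner then hands that list back to <em>dpkg --set-selections</em> and purges with <em>dpkg -Pa</em>, both of which need root.</p>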
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that: we need compatible x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and the various Qt libraries.</p>
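<p>With a dependency list this long it helps to filter ldd down to just the missing entries, e.g. <em>ldd /usr/bin/skype | grep &#39;not found&#39;</em>. The same filter is shown here against a captured sample of the output above rather than the live binary:</p>

```shell
# Against the real binary you would run:  ldd /usr/bin/skype | grep 'not found'
# Here the filter runs over a captured sample of ldd's output instead.
printf '%s\n' \
  'libXss.so.1 => not found' \
  'libQtGui.so.4 => not found' \
  'libc.so.6 => /lib32/libc.so.6 (0xf7306000)' \
  | grep 'not found'
```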
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change in code works, nothing appears to break and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
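<p>As an aside on the filename juggling: the echo-into-sed dance works, but bash&#39;s built-in suffix stripping does the same rename without spawning two extra processes per image (purely a style preference; holiday.jpg is just an example name):</p>

```shell
IMAGE=holiday.jpg
# ${IMAGE%.jpg} strips a trailing ".jpg"; we then append the new suffix
echo "${IMAGE%.jpg}-resized.jpg"
# prints: holiday-resized.jpg
```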
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In these cases, convert would actually resize the image to something like 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, you need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin&#39;. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch called someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
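<p>If you want to watch the whole lifecycle without touching a real remote, you can play it out against a throwaway bare repository (all the paths and names below are purely illustrative):</p>

```shell
set -e
tmp=$(mktemp -d)

# A local stand-in for the remote, plus a working clone of it
git init -q --bare "$tmp/atestrepo.git"
git clone -q "$tmp/atestrepo.git" "$tmp/work"
cd "$tmp/work"
git config user.email aaron@example.com
git config user.name "Aaron"
git commit -q --allow-empty -m 'initial commit'

# Publish two branches...
git push -q origin HEAD:master HEAD:develop
git branch -r    # lists origin/develop and origin/master

# ...then delete one by pushing 'nothing' into it
git push -q origin :develop
git branch -r    # only origin/master remains
```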
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits for end-of-file (EOF) before opening and writing to the output file. I.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it to the original file.</p>
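<p>You can convince yourself the conversion works on a scratch file (demo.txt below is illustrative; if you don&#39;t have sponge to hand, a temp-file shuffle gives the same in-place effect):</p>

```shell
# Byte 0xE9 is 'é' in cp1252, so this writes 'café' in that encoding
printf 'caf\351\n' > demo.txt

# Convert in place; with moreutils installed the second line could instead be
#   iconv -f cp1252 -t utf-8 demo.txt | sponge demo.txt
iconv -f cp1252 -t utf-8 demo.txt > demo.txt.utf8 && mv demo.txt.utf8 demo.txt
cat demo.txt
# prints: café
```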
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
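<p>You can try the same pattern offline by standing in a locally built tarball for the download (atarfile is a scratch directory; -f - is added so tar explicitly reads the archive from standard input, since not every tar build defaults to that):</p>

```shell
# Build a small tarball to stand in for the remote file
mkdir -p atarfile
echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile

# Same shape as the wget one-liner: tar reads from a named pipe
# fed by whatever command runs inside <( ... )
tar zxvf - < <(cat atarfile.tar.gz)
cat atarfile/file.txt
# prints: hello
```

<p>Note this needs bash itself; process substitution isn&#39;t available in plain sh.</p>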
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can. Save changes. Then go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the Drush disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
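<p>To make the substitution mechanics concrete, here is a tiny drush-free sketch of the same pattern (the module names here are just placeholders):</p>

```shell
# The $( ) (or backticked) command runs first; its stdout is split
# into words and passed as arguments to the outer command.
modules=$(echo "ad ad_channel click_filter")
echo drush pm-disable $modules
# prints: drush pm-disable ad ad_channel click_filter
```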
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
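<p>To double-check the tracking relationship took effect, something along these lines should work on a reasonably recent git (the branch name matches the example above):</p>

```shell
# List local branches together with the upstream each one tracks
git branch -vv

# Or query the upstream of a single branch directly
git rev-parse --abbrev-ref my-new-feature@{upstream}
# prints: origin/my-new-feature
```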
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; and you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
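<p>You can verify the result with the id command; a quick check, using the &#39;aaron&#39; example from above:</p>

```shell
# Print the names of every group the user belongs to;
# 'wheel' should now appear in the list
id -nG aaron
```

<p>Note the new group only takes effect for sessions started after the change; existing logins keep their old group list.</p>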
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
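<p>One thing worth knowing: unlike pop, apply leaves the stash entry in place. A quick sketch for checking and tidying up afterwards:</p>

```shell
# See what is still stored on the stash stack
git stash list

# Once the changes are safely committed, discard the entry
git stash drop
```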
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
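<p>Alternatively, the same identity can be set from the shell as the jenkins user (a sketch, assuming sudo access and that the jenkins account exists):</p>

```shell
# Run git config as the jenkins user; this writes the same
# entries into /var/lib/jenkins/.gitconfig
sudo -u jenkins git config --global user.name "Jenkins"
sudo -u jenkins git config --global user.email "jenkins@localhost"
```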
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
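<p>The edit can also be scripted if you&#39;re automating provisioning; a sketch using sed (8081 here is just an example port):</p>

```shell
# Rewrite the HTTP_PORT line in place, keeping a .bak backup
sudo sed -i.bak 's/^HTTP_PORT=.*/HTTP_PORT=8081/' /etc/default/jenkins
sudo service jenkins restart
```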
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this as, of all the caching quirks in Magento, this is the one I&#39;ve burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>
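<p>For the record, the quickest fix after adding a column is to flush the cache, either from the admin (System &gt; Cache Management) or straight from the shell in the Magento root (this assumes the default file-based cache backend):</p>

```shell
# Blow away Magento's file-based cache entries from the install root
rm -rf var/cache/*
```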

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful by itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be done either through the graphical package manager or with aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - EGit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Model_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Model_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent invocations of git pull will give you an ugly error</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin <a href="mailto:git@github.com">git@github.com</a>:ajbonner/foo.git</p>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<p> $ git branch --set-upstream master origin/master</p>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least amount of work.</p>
<p>You can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>To avoid having to do this.</p>
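<p>The whole sequence can be exercised end to end against a local bare repository standing in for github (a sketch; all paths are temp directories, the user identity is illustrative, and modern git spells option 4&#39;s flag --set-upstream-to):</p>

```shell
# End-to-end sketch using a local bare repo in place of github.
remote=$(mktemp -d)/foo.git
git init --bare "$remote"

work=$(mktemp -d)
cd "$work"
git init
git symbolic-ref HEAD refs/heads/master   # ensure the branch is named master
git config user.email dev@example.com     # hypothetical identity for the sketch
git config user.name "Dev"
echo hello > README
git add README
git commit -m 'initial commit'

# add the remote, tracking master, then push the existing history
git remote add --track master origin "$remote"
git push origin master

# modern git spells the post's --set-upstream as --set-upstream-to
git branch --set-upstream-to=origin/master master
git pull                                  # no refspec needed any more
```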
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and, with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.2.0 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get the list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
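<p>The approach boils down to something like this (a sketch, not the gist itself; add -u/-p credentials to both commands as your setup requires, and note the pattern here is a SQL LIKE pattern, so % rather than *):</p>

```shell
# Dump only the tables whose names match a SQL LIKE pattern.
mysqldump_bypattern() {
  local db="$1" pattern="$2" tables
  tables=$(mysql -N -e "SHOW TABLES LIKE '${pattern}'" "$db")
  [ -n "$tables" ] || { echo "no tables match ${pattern}" >&2; return 1; }
  # word-splitting of $tables is intentional: one argument per table name
  mysqldump "$db" $tables
}
# usage: mysqldump_bypattern mydb 'mytables_%' > mytables.sql
```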
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using the Strings from the keys array instead, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String. So :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it is immutable, and there is only ever one copy of it. So it is much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages) two Strings, even if they consist of the same sequence of characters, are different objects. In Ruby two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character for ASCII output or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: the inline compression slows the dump down, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row level locking, which is slightly less offensive, but it should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump; that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a START TRANSACTION statement before dumping the contents of a table, ensuring a consistent view of the data without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean a table’s integrity can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports, as MySQL will only build the indexes at the end of the import. With keys enabled, the indexes are updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
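<p>Putting the options from the last two sections together (a sketch; the database name is illustrative, and you should add -u/-p credentials as your setup requires):</p>

```shell
# One pass that avoids locks, defers index rebuilds, batches commits and
# compresses the result on the way out.
optimised_dump() {
  local db="$1"
  mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit "$db" | gzip -c
}
# usage: optimised_dump mydatabase > mydump.sql.gz
```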
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new, empty git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds (or, to use git terminology, ‘stages’) a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files. Alternatively, you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide or show untracked files in the status report. Passing one or more paths runs the status against those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is the canonical naming for a bare git repository (i.e. one that only has the meta information and no working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>. This mirrors the familiar centralised model of Subversion and svn commit.</p>&#13;
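<p>Tying the commands above together, here is a minimal end-to-end sketch. The /tmp paths and the demo identity are illustrative; newer git versions may name the default branch main rather than master, so HEAD is pushed instead of naming a branch:</p>

```shell
# Start clean, then create a bare "central" repository
rm -rf /tmp/demo-central.git /tmp/demo-work
git init --bare /tmp/demo-central.git

# Create a working repository and make an initial commit
git init /tmp/demo-work
cd /tmp/demo-work
git config user.email 'demo@example.com'   # illustrative identity
git config user.name 'Demo User'
echo 'hello' > README
git add README
git commit -m 'Initial commit'

# Point it at the central repository and push
git remote add origin /tmp/demo-central.git
git push origin HEAD
```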
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository, such as one created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs, I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the cache directory PEAR expects:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger, and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> instructions up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and found the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means 'bath' in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C, and Bash the first element of ARGV is the program's name. In Ruby, and in Perl, it is the first argument passed into the program.</p>&#13;
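<p>You can check the two conventions from the shell with one-liners. This sketch contrasts python3 (which follows the C convention) with Ruby; having both interpreters on the PATH is an assumption, so the Ruby line is guarded:</p>

```shell
# python3 follows the C convention: argv[0] is the "program" (the -c stub here)
python3 -c 'import sys; print(sys.argv[0])' helloworld
# -> -c

# ruby follows the Perl convention: ARGV[0] is the first real argument
if command -v ruby >/dev/null; then
  ruby -e 'puts ARGV[0]' helloworld
fi
```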
<p>I'm trying to think which makes more sense; probably the Ruby/Perl implementation, though I'm used to arguments starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little like clicking 'Start' to shut down in Windows. To uninstall a plugin / feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL RubyGem as directed by Rake, you link against the bundled OS X MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation, as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document with an ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional 'filter' parameter. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, it seems in PHP documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
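<p>It's worth sanity-checking the substitution on a sample line before trusting it against the whole dump (a blanket replace like this will also touch any row data that happens to contain the string 'latin1', so skim the result). A quick sketch:</p>

```shell
# A representative schema line from a latin1 mysqldump:
printf ') ENGINE=MyISAM DEFAULT CHARSET=latin1;\n' | sed 's/latin1/utf8/g'
# -> ) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```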
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
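<p>To convince yourself iconv is doing the right thing, try it on a known latin1 byte sequence first (this assumes a glibc or libiconv iconv on the PATH; 0xE9, octal 351, is 'é' in latin1):</p>

```shell
# 'caf\351' is latin1 for 'café'; iconv re-encodes the byte as two-byte UTF-8
printf 'caf\351\n' | iconv -f latin1 -t utf-8
# -> café
```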
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any custom language packs should be converted to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but keep your CV conservative, well spaced and set in a minimum of fonts. Giant mastheads, fancy bullets and a mess of fonts aren't impressing anyone, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads: clean and clear is the key. Save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required will only annoy the recruiter. That means your CV gets forwarded to the 'round file', not the client's inbox.</li>&#13;
<li>Don't list every course you did at university and expect the client to care. Of our graduate applicants, not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job: work out what specific skills you got from your courses and detail each skill, qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll run into the two-page rule. Keep going, but when you're finished pare the text down into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leaves a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make the best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify and support what remains. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life, not least this blog itself. If you listened to every top-ten list of what not to include in your CV, you'd quickly find there's absolutely nothing you should put in it. Critically consider what you read about what a good CV looks like, and make your own mind up based on the supporting arguments and your own CV's feedback. For example, if you disagree with point two and decide to include an 'interests' section, ask recruiters when they call you what they think about it: did it provide value or was it noise? If you're getting interviews, ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similarity of the titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables whose names are capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
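<p>A quick sketch of what I mean (the constant name here is my own invention, picked purely for illustration):</p>&#13;

```ruby
# A constant in Ruby is any identifier that starts with a capital letter.
ANSWER = 42

# Reassigning it merely emits an "already initialized constant" warning
# on stderr -- the program carries on and the "constant" takes the new value.
ANSWER = 43

puts ANSWER  # prints 43
```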
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know: if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to gain an appreciation of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL '\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array of 20 chars (room for 19 characters plus the terminating NUL), and str itself points to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
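<p>That reallocate-then-copy dance can be sketched in a few lines of C. This is only an illustration (the <code>append</code> helper is my own name, not a real libc function):</p>&#13;

```c
#include <stdlib.h>
#include <string.h>

/* Append `extra` to the heap-allocated string `str`, growing the
 * buffer first. A rough sketch of the work hidden behind a single
 * string concatenation in a managed language. */
char *append(char *str, const char *extra) {
    size_t need = strlen(str) + strlen(extra) + 1; /* +1 for '\0' */
    char *grown = realloc(str, need);              /* may move the whole block */
    if (grown == NULL) {
        free(str);                                 /* reallocation failed */
        return NULL;
    }
    strcat(grown, extra);                          /* now safe: room exists */
    return grown;
}
```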
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. These were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
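<p>You can see which order your own machine uses with a few lines of C (a minimal sketch; the function name is mine, not taken from any paper or library):</p>&#13;

```c
#include <stdint.h>
#include <string.h>

/* Return 1 on a little-endian host, 0 on a big-endian one, by
 * inspecting which end of a 32-bit value is stored first in memory. */
int is_little_endian(void) {
    uint32_t probe = 1;
    uint8_t first_byte;
    memcpy(&first_byte, &probe, 1);   /* read the lowest-addressed byte */
    return first_byte == 1;           /* low-order byte first => little endian */
}
```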
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. One way around physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
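<p>A short sketch of the idea (sample data invented for illustration): build CSV data entirely in memory, then read it back as a string, with no temporary file on disk.</p>&#13;

```php
<?php
// Write rows with fputcsv into an in-memory stream...
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'Aaron'));

// ...then seek back to the start and drain the buffer into a string.
rewind($fh);
$csv = stream_get_contents($fh);
fclose($fh);

echo $csv;
```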
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, i.e. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded post data) and the response from your server.</p>&#13;
<p>Now assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without downloading the tarball and using something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows, with its case-insensitive file system, this is fine; on a case-sensitive file system, e.g. case-sensitive HFS+ (Mac) or a typical Unix file system, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you would have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find the attribute code, look it up either in the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. Any row whose date_column holds a date earlier than this value is therefore more than 30 days old, and that is exactly what the WHERE clause above selects.</p>&#13;
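<p>If you want to sanity-check the interval arithmetic outside of mysql, GNU date (on a typical Linux box) can reproduce it. This is just an illustration, not part of the query:</p>

```shell
# reproduce DATE_SUB('2010-05-20', INTERVAL 30 DAY) with GNU date
cutoff=$(date -d '2010-05-20 -30 days' +%F)
echo "$cutoff"    # 2010-04-20
# rows whose date_column is earlier than this cutoff are more than 30 days old
```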
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three-column layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore, within the &lt;customer_account&gt; element, we add the code:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where the second parameter to the delete command was removed. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. Thankfully, 1.4.4 added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However, by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
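<p>As a quick sketch (de_DE.UTF-8 is just the example here; pick any UTF-8 locale your system lists):</p>

```shell
# list the UTF-8 capable locales installed on this machine
locale -a | grep -i utf || echo 'no UTF-8 locales installed'

# switch the current session to a UTF-8 locale
export LC_ALL='de_DE.UTF-8'
echo "LC_ALL is now $LC_ALL"
```

<p>Adding the export line to your profile makes the change permanent for future sessions.</p>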
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ. So a cp-1252 trademark symbol has a different code in utf-8, and if you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso-8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
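<p>The same conversion can be seen byte for byte with the iconv command-line tool available on most Unix systems (0xA9, octal 251, is the copyright sign in iso-8859-1):</p>

```shell
# a lone iso-8859-1 copyright sign (byte 0xA9, octal 251)...
printf '\251' > latin1.txt
# ...converts to the two-byte utf-8 sequence c2 a9
iconv -f ISO-8859-1 -t UTF-8 latin1.txt | od -An -tx1
rm latin1.txt
```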
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to use it like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code></p>&#13;
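<p>The reason is that the BSD sed shipped with Mac OS X requires an argument to -i (the backup suffix), while GNU sed on Linux does not. A suffix attached directly to -i is accepted by both implementations, which makes it the portable spelling:</p>

```shell
printf 'hello world\n' > helloworld.txt

# works with GNU sed and BSD sed alike: edit in place, keeping a .bak backup
sed -i.bak 's/hello/goodbye/g' helloworld.txt

cat helloworld.txt    # goodbye world
rm helloworld.txt helloworld.txt.bak
```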
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br /> http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>Here ${#FILES[@]} expands to the number of elements in FILES, and seq produces a sequence of numbers from x to y, so $(seq 0 $((${#FILES[@]} - 1))) generates one index for each element of the array. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4 (one per line).</p>&#13;
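<p>For completeness, bash can also iterate over the array's values directly, with no index arithmetic at all; a sketch of the same loop using the placeholder paths from above:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )

# "${FILES[@]}" expands to one word per element, so paths containing
# spaces survive intact; no seq or index arithmetic is needed.
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```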
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: baseurls, or test payment and shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
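<p>If you would rather not open ~/.gitconfig in an editor, the same alias can be added from the terminal (a sketch; like the alias above, it assumes your remote is named origin):</p>

```shell
# Single quotes stop the shell expanding the backticks here;
# git expands them each time the alias runs.
git config --global alias.sup \
  '!git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`'
```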
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell; this means variables cannot be passed along the pipeline, as each subprocess starts with a brand new environment.</p>
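<p>A quick demonstration of the problem (the variable name and input are made up):</p>

```shell
COUNT=0
# The while loop runs in a subshell because it is a pipeline stage,
# so its copy of COUNT is thrown away when the pipeline ends.
printf 'a\nb\nc\n' | while read -r LINE; do COUNT=$((COUNT + 1)); done
echo "$COUNT"   # prints 0, not 3

# One workaround: replace the pipe with command substitution
COUNT=$(printf 'a\nb\nc\n' | wc -l | tr -d ' ')
echo "$COUNT"   # prints 3
```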
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
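<p>To see what this buys you, here is a throwaway-repo sketch (paths and branch names are arbitrary, and the config is set per-repository here rather than globally):</p>

```shell
REPO=$(mktemp -d)
cd "$REPO" && git init -q
git config user.email you@example.com
git config user.name demo
git config merge.ff only                # same setting, repo-local

MAIN=$(git symbolic-ref --short HEAD)   # master or main, per git version
echo one > file && git add file && git commit -qm first

git checkout -qb feature
echo two >> file && git commit -qam second

git checkout -q "$MAIN"
echo three > other && git add other && git commit -qm third  # diverge

# The branches no longer share a linear history, so with merge.ff=only
# the merge is refused instead of silently creating a merge commit.
git merge feature || echo 'refused: not a fast-forward'
```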
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes; this book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: the Statist style and the Mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
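<p>The same query shape can be tried from the shell with SQLite, which also supports NULLIF (the table and figures below are made up for illustration):</p>

```shell
# Group 1 contains a 0.00 price; NULLIF turns it into NULL so MIN skips it.
# Group 2 has only a 0.00 price, so its minimum comes back as NULL (blank).
sqlite3 :memory: "
  CREATE TABLE products (group_id INT, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 0.0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;
"
```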
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out that&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command tests for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working.</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal.</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file for the mysql version you&#39;re using to PHP&#39;s mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
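<p>As a rough sketch of what such a wrapper could look like, here is a tiny, hypothetical Str class (the class and method names are my own invention, not an existing library) that gives strstr and strrchr consistent, discoverable names:</p>

```php
<?php
// Hypothetical string wrapper; Str, afterFirst and afterLast are
// invented names, shown only to illustrate the object-API idea.
class Str
{
    private $value;

    public function __construct($value)
    {
        $this->value = (string) $value;
    }

    // Everything after the first occurrence of $needle.
    public function afterFirst($needle)
    {
        $tail = strstr($this->value, $needle);
        return $tail === false ? '' : substr($tail, strlen($needle));
    }

    // Everything after the last occurrence of $needle.
    public function afterLast($needle)
    {
        $tail = strrchr($this->value, $needle);
        return $tail === false ? '' : substr($tail, strlen($needle));
    }
}

$url = new Str('http://www.google.com/a/b/c/d.img');
echo $url->afterLast('/'); // prints d.img
```

<p>The method names state the intent, so nobody has to remember which of strstr or strrchr anchors at the last occurrence.</p>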
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
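<p>Under the hood, -s quit and -s stop simply deliver Unix signals to the master process: SIGQUIT for the graceful shutdown and SIGTERM for the immediate one. You can confirm the signal numbers from any bash shell:</p>

```shell
# nginx -s quit sends SIGQUIT, nginx -s stop sends SIGTERM;
# print their numbers using the bash built-in kill
kill -l QUIT
kill -l TERM
```

<p>This also means that when the nginx binary isn&#39;t handy, something like sudo kill -QUIT &quot;$(cat /var/run/nginx.pid)&quot; (assuming the usual Ubuntu pid file location) achieves the same graceful shutdown.</p>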
<p>Let your visitors finish their drink; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. In November, though, I did resolve to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80, and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. And although most of the vocabulary for OO software development was formalised during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I consider that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was doing the rounds in the 60s, I wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight compulsion to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer, and after a few 30 minute sprints I pick up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long Rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
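<p>To see it in action without touching /etc/passwd, feed it a tiny sample (the field values here are made up for illustration):</p>

```shell
# -s sets the input delimiter, -t aligns the fields into a table
printf 'name:uid\nroot:0\ndaemon:1\n' | column -s: -t
```

<p>The colons are consumed and each field lands in its own padded column.</p>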
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
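<p>A caveat: this workflow depends on a couple of zsh options that are not all enabled out of the box. A minimal ~/.zshrc sketch (the setopt names are standard zsh; the d alias is the common dirs -v shorthand that frameworks like oh-my-zsh define for you):</p>

```shell
# ~/.zshrc - options behind the directory-stack workflow above
setopt AUTO_PUSHD        # every cd pushes the previous directory onto the stack
setopt PUSHD_IGNORE_DUPS # keep the stack free of duplicates
setopt AUTO_CD           # '..' or a bare directory name behaves like cd
alias d='dirs -v'        # list the stack with the indices shown above
```

<p>Jumping with a bare number (the 1 above) comes from numeric aliases along the lines of alias 1=&#39;cd -1&#39;, which oh-my-zsh also sets up.</p>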
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have Macports in /opt/local (the default), and are using the mysql55 port).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
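<p>If the project manages gems through Bundler, the same flags can be stored once with Bundler&#39;s per-gem build configuration, so bundle install picks them up automatically (paths as above; a sketch for this Macports layout):</p>

```shell
bundle config build.mysql2 \
  --with-mysql-lib=/opt/local/lib/mysql55/mysql \
  --with-mysql-include=/opt/local/include/mysql55/mysql
```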
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
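<p>If you debug this way regularly, the forward can live in ~/.ssh/config instead of being retyped each time (RemoteForward is the standard ssh_config equivalent of -R):</p>

```
Host myvm.local
    RemoteForward 9000 localhost:9000
```

<p>With that in place, a plain ssh myvm.local sets up the tunnel automatically.</p>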
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
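<p>If you haven&#39;t tried it, declaring a dependency is a one-file affair. A minimal composer.json sketch (Monolog is just one real example of a package published on Packagist):</p>

```json
{
    "require": {
        "monolog/monolog": "1.*"
    }
}
```

<p>Running composer install then resolves the dependency into vendor/ and generates an autoloader you can simply require.</p>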
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable version of package y, while package z requires the beta version of y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best (and dubious quality at worst), and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months, <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with rather than piped input, so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
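<p>A close cousin worth knowing is --name-status, which adds a letter for what happened to each file (A added, M modified, D deleted). A quick demonstration in a throwaway repository:</p>

```shell
# set up a disposable repo (the -c identity flags avoid touching global config)
cd "$(mktemp -d)"
git init -q
echo 'hello' > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m 'add file.txt'

# list the files in the commit together with their status letters
git show --name-status --oneline HEAD
```

<p>The same flag works with git diff, e.g. git diff master..origin/master --name-status.</p>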
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password, and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password = PASSWORD(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
mysql&gt; FLUSH PRIVILEGES;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as someone who has programmed PHP for a decade) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was largely respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven because, owing to the age of the language, there&#39;s plenty of out-of-date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> ranking well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform, you learn through brutal experience what works and what doesn&#39;t, and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, so I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog post didn&#39;t realise this, and so his solution was needlessly complicated.</p>
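<p>The ordering rule can be illustrated with plain Ruby. This is a sketch of the search behaviour only, not Rails internals; the handler table and exception classes are made up for illustration:</p>

```ruby
# rescue_from registers handlers in declaration order; lookup then scans
# the list bottom-up and invokes the handler of the first class for which
# exception.is_a?(klass) holds. So the catch-all Exception handler must be
# registered FIRST, leaving more specific handlers to win the bottom-up scan.
HANDLERS = [
  [Exception,     ->(e) { "fallback" }],  # catch-all, declared first
  [ArgumentError, ->(e) { "specific" }],  # specific handler, declared later
]

def handle(error)
  # scan from the bottom of the list upward, like ActiveSupport::Rescuable
  _klass, handler = HANDLERS.reverse.find { |klass, _| error.is_a?(klass) }
  handler.call(error)
end

handle(ArgumentError.new)  # => "specific"
handle(RuntimeError.new)   # => "fallback"
```

<p>In application_controller.rb terms: declare <strong>rescue_from Exception</strong> first, and the more specific <strong>rescue_from</strong> lines after it.</p>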
<p>What can we learn from this? Well, that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones, for one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously deleted them with $ git push origin :branch from another host).</p>
<p>To freshen things up I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer&#39;s type to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
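<p>For reference, the shape of the override is something like the following sketch. The event and observer names shown are the usual Mage_Log frontend ones, but check them against the config.xml of your Magento version:</p>

```xml
<!-- app/etc/local.xml (sketch): neutralise Mage_Log's visitor-logging
     observers by setting their type to the string 'disabled' -->
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <log><type>disabled</type></log>
                </observers>
            </controller_action_predispatch>
            <customer_login>
                <observers>
                    <log><type>disabled</type></log>
                </observers>
            </customer_login>
        </events>
    </frontend>
</config>
```

<p>Repeat the same block for each logging event you want to silence.</p>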
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think: I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the _prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy, replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of how bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one spot but (programmers being human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
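<p>A sketch of what such an adminhtml.xml can look like. The menu node (newsletter) and module name here are illustrative; substitute the path of the menu item you want to hide:</p>

```xml
<!-- app/code/local/My/Module/etc/adminhtml.xml (sketch) -->
<config>
    <menu>
        <newsletter>
            <depends>
                <!-- this module does not exist, so the item never renders -->
                <module>Nonexistent_Module</module>
            </depends>
        </newsletter>
    </menu>
</config>
```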
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production, and the resulting Magento report/xxxx files swamped everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes {} with each found filename) and the escaped \; terminates the command sequence (much like ; does in regular bash).</p>
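<p>Reasonably recent versions of GNU find (and BSD find) can also compare against a date string directly with -newermt, which skips the touch step entirely. A self-contained sketch, using a scratch directory and illustrative dates:</p>

```shell
# create three files straddling the range in a scratch directory
dir=$(mktemp -d)
touch -t 201208050000 "$dir/too_old"
touch -t 201208061200 "$dir/in_range"
touch -t 201208080000 "$dir/too_new"

# delete files modified after 2012-08-06 00:00 but not after 2012-08-07 00:00;
# -delete does the same job as -exec rm ... \;
find "$dir" -type f -newermt '2012-08-06' ! -newermt '2012-08-07' -delete

ls "$dir"   # only too_old and too_new remain
```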
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher rated powersupply can support a lower power rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change with no problem.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because you can&#39;t put a literal single quote inside the single-quoted awk program, you pass one in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
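<p>You can exercise the awk/paste/sed stages without a database by feeding sample lines through them (the values here are made up):</p>

```shell
# simulate three rows of --silent mysql output
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' \
  | sed 's/\(.*\)/[\1];/'
# prints: ['red','green','blue'];
```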
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name:[Start TO Finish]
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in this date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic (it doesn&#39;t look at multi-product combinations) but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules, and this caused that product/salesrule index loop to detonate.</p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each iteration. It does mean the validator has to be constructed again on each iteration, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
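<p>Before launching, it&#39;s worth sanity-checking the file. A quick sketch (the optional desktop-file-validate tool, from the desktop-file-utils package, is only run if it happens to be installed; the scratch path is just for illustration):</p>

```shell
# Write a minimal entry to a scratch file and check the keys a launcher needs
f=$(mktemp --suffix=.desktop)
cat > "$f" <<'EOF'
[Desktop Entry]
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
EOF
# Validate properly if desktop-file-utils is available on this machine
command -v desktop-file-validate >/dev/null && desktop-file-validate "$f" || true
grep -q '^Exec=' "$f" && grep -q '^Type=Application' "$f" && echo "entry looks sane"
```

<p>Run the same checks against ~/.local/share/applications/jetbrains-phpstorm.desktop once you&#39;ve saved it.</p>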
<p>Hit your Windows or Super key and type &#39;Php&#39; - you should see the newly created desktop entry there. Once PHPStorm has launched and fully started, you can opt to keep it locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils... and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils and so on. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean, in the absolute best case, they merely became unbootable.</p>
<p>The process to recover the system was pretty much the same as installing it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At that point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or USB key), get the network card modules loaded and get a DHCP address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions: the kernel and kernel modules will be those of the host. If you need access to some specific hardware, you need to set that up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each; this is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was considered out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and pasting it into MySQL returned a bunch of valid-looking results. Oddly though, the status column came first and the product id column second (unlike the if branch, where the product id comes first). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend_Db&#39;s fetchPairs() returns an associative array where the first column of each row becomes the key and the second the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows - one per unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it gets used as the key.</p>
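<p>The effect is easy to reproduce outside Magento. fetchPairs() keys the result on the first column, so with the status column first every product collapses into one of the status buckets, while with the entity id first you keep a row per product. A quick simulation with awk over made-up (status, entity_id) pairs:</p>

```shell
# Columns: status entity_id -- keying on the first column collapses 3 rows into 2
printf '1 10\n1 11\n2 12\n' | awk '{m[$1]=$2} END {n=0; for (k in m) n++; print n}'   # prints 2
# Columns: entity_id status -- keying on entity_id keeps all 3 rows
printf '10 1\n11 1\n12 2\n' | awk '{m[$1]=$2} END {n=0; for (k in m) n++; print n}'   # prints 3
```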
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The idea that software is about communication is emphasised throughout the book, and tests are no different: they should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be comfortably read on the train or bus, or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct - perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP white book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Michael Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
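<p>If you&#39;d rather keep filetype mappings out of vimrc, the same autocmd can live in a ftdetect file, which Vim picks up automatically from its runtime path:</p>

```shell
# Drop the mapping into Vim's ftdetect directory instead of vimrc
mkdir -p "$HOME/.vim/ftdetect"
echo 'au BufRead,BufNewFile *.twig set filetype=htmljinja' > "$HOME/.vim/ftdetect/twig.vim"
```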
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open a login shell by other means, such as with the su - command or via an explicit login-shell option that some desktop environments provide. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc only if your .bash_profile sources it (as many distributions&#39; default profiles do).</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
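<p>That arrangement looks like this - a minimal sketch of the convention, written to a scratch file here rather than your real ~/.bash_profile:</p>

```shell
profile=$(mktemp)
cat > "$profile" <<'EOF'
# ~/.bash_profile: one-time environment setup for login shells
export PATH="$HOME/bin:$PATH"
# hand everything else to .bashrc so login and non-login shells behave the same
[ -f ~/.bashrc ] && . ~/.bashrc
EOF
bash -n "$profile"   # syntax-check the result
```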
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem, and making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
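<p>An alternative that reads a little more clearly is to pull the &#39;rc&#39; (removed, config files remaining) packages out of dpkg -l and purge those. Demonstrated below on canned dpkg -l output so it can be dry-run; on a real system replace the canned listing with the live one and drop the echo:</p>

```shell
# Canned dpkg -l lines: one package in 'rc' state, one properly installed ('ii')
listing='rc  oldpkg   1.0  amd64  removed, config files remain
ii  keeppkg  1.0  amd64  installed'
# Select the rc packages and show the purge command that would run
# (real usage: dpkg -l | awk '/^rc/ {print $2}' | xargs -r sudo dpkg -P)
printf '%s\n' "$listing" | awk '/^rc/ {print $2}' | xargs -r echo dpkg -P   # prints: dpkg -P oldpkg
```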
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm must only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
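<p>The natpf1 rule is a comma-separated tuple: name, protocol, host IP, host port, guest IP, guest port, with empty fields meaning &#39;any&#39;. For a running VM the same rule goes through controlvm. The commands below are echoed rather than executed, so the sketch doesn&#39;t require VirtualBox to be installed:</p>

```shell
# name,proto,hostip,hostport,guestip,guestport -- empty host/guest IP = any
rule="guestssh,tcp,,2222,,22"
echo VBoxManage controlvm '"VM name"' natpf1 "\"$rule\""
echo VBoxManage controlvm '"VM name"' natpf1 delete '"guestssh"'
```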
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 shared libraries. But when you get these sorts of issues it&#39;s always best to see what else is missing too. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
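<p>As an aside: on newer dpkg versions (1.16.2 and later, i.e. Ubuntu 12.04 onwards) the config-file approach above is superseded by a dpkg subcommand. If editing the multiarch file does nothing for you, this is the sketch to try instead:</p>

```shell
# Register i386 as a foreign architecture, then refresh package lists
sudo dpkg --add-architecture i386
sudo apt-get update
```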
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying, things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately upgrading my test-suites to be 3.6 compatible, is the extraction of database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
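<p>Since varnishd should exit non-zero when compilation fails, you can lean on its exit status for a quick yes/no check (handy in deploy scripts) without wading through the generated C. A small sketch:</p>

```shell
# Discard the generated configuration; only the exit status matters
varnishd -C -f /path/to/mysetup.vcl > /dev/null && echo "VCL OK"
```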
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and configuring a sane desktop environment with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
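<p>Identify is handy for sanity-checking the results of a batch like the one above. A quick sketch (the -resized suffix matches the loop earlier in this post):</p>

```shell
# One line per file: name, format, geometry and so on
identify *.jpg

# Or pull out just filename and dimensions with a format string
identify -format "%f %wx%h\n" *-resized.jpg
```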
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of local branch mybranch to a branch called adiffnamefortheremotebranch at the remote origin&#39;. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch called someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
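<p>Worth noting: newer Git versions (1.7.0 onwards) also offer a more self-documenting spelling of the same operation:</p>

```shell
# Equivalent to 'git push origin :develop', but harder to mistype
git push origin --delete develop
```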
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file. I.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
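<p>For example, modern versions of file grow a --mime-encoding flag (older ones bundle the charset into -i output) which prints its best guess at the text encoding. A sketch, with myfile.txt standing in for whatever you are checking:</p>

```shell
# Ask file for its best guess at the text encoding
file --mime-encoding myfile.txt
```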
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
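<p>Of course, for this particular job a plain pipe does the same thing with less ceremony; process substitution really earns its keep when a command expects a <em>filename</em> rather than stdin. The everyday version of the same one-liner:</p>

```shell
# Stream the download straight into tar via a pipe
wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxv
```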
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
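<p>And if you are driving this from a larger upgrade script, Drush&#39;s -y (--yes) option answers that confirmation prompt for you:</p>

```shell
# -y auto-confirms the prompt; useful in non-interactive scripts
drush -y pm-disable `drush pm-list --no-core --type=module --pipe`
```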
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
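<p>On Git 1.7.0 and later the last two steps collapse into one; the -u (--set-upstream) flag on push both publishes the branch and sets up tracking:</p>

```shell
# Push the branch and set its upstream in one go
git push -u origin my-new-feature
```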
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
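<p>It&#39;s worth double-checking the result afterwards; groups (or id -nG) lists a user&#39;s current group memberships, so you can confirm wheel now appears:</p>

```shell
# Confirm the wheel group now appears in the user's group list
groups aaron
id -nG aaron
```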
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
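<p>Note that <code>git stash apply</code> leaves the stash entry in place (handy if you want to apply it somewhere else too). A sandboxed sketch of the <code>pop</code> variant, which applies and drops the entry in one step:</p>

```shell
# 'git stash pop' = 'git stash apply' followed by 'git stash drop'
tmp=$(mktemp -d) && cd "$tmp"
git init --quiet
git -c user.name=Demo -c user.email=demo@example.com \
    commit --allow-empty -m 'Initial commit' --quiet
echo 'work in progress' > notes.txt
git add notes.txt
git -c user.name=Demo -c user.email=demo@example.com stash push --quiet
git -c user.name=Demo -c user.email=demo@example.com stash pop --quiet
git stash list   # prints nothing: the entry was applied and dropped
```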
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
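<p>Recent versions of git will also do this for you: if the branch name you check out doesn&#39;t exist locally but matches exactly one remote branch, a tracking branch is created automatically. A sandboxed sketch (run in a throwaway directory):</p>

```shell
# Plain 'git checkout develop' creates a local 'develop' tracking 'origin/develop'
tmp=$(mktemp -d)
git init --bare --quiet "$tmp/origin.git"
git clone --quiet "$tmp/origin.git" "$tmp/seed" 2>/dev/null
cd "$tmp/seed"
git -c user.name=Demo -c user.email=demo@example.com \
    commit --allow-empty -m 'Initial commit' --quiet
git branch develop
git push --quiet origin "$(git symbolic-ref --short HEAD)" develop
git clone --quiet "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git checkout develop                              # no '-b' needed
git rev-parse --abbrev-ref develop@{upstream}     # prints: origin/develop
```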
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
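<p>As a sketch of that ordering (paths are illustrative): putting the application&#39;s own library directory before the global PEAR tree means its bundled ZF copy wins.</p>

```ini
; php.ini (or set via set_include_path() in the app bootstrap):
; local application libraries first, then the global PEAR tree
include_path = ".:/var/www/myapp/library:/usr/share/php"
```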
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig:</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
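<p>Equivalently, you can set the identity from the shell; run this as the jenkins user (e.g. via <code>sudo -u jenkins -H bash</code>) so the keys land in that user&#39;s ~/.gitconfig. The name and email are placeholders:</p>

```shell
# Writes the same [user] section shown above into ~/.gitconfig
git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"
```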
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an Ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector, along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our Ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give it a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39; - naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run its first build before the project workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a git user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload the server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have burned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the graphical package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - EGit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>To avoid having to set this up for each new branch, you can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.2.0 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get the list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using the Strings from the keys array instead, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
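<p>A common way to sidestep the mismatch entirely is to normalise every key to a Symbol up front. A small sketch (the hash mirrors the example above; the variable names are mine):</p>

```ruby
# Copy a mixed-key hash into one whose keys are all Symbols.
mixed = { :mykey => 'hello world', 'another_key' => 'goodbye world' }

normalised = {}
mixed.each { |k, v| normalised[k.to_sym] = v }

puts normalised[:mykey]        # hello world
puts normalised[:another_key]  # goodbye world
```

<p>On current Ruby versions, <code>mixed.transform_keys(&amp;:to_sym)</code> does the same in one call, though that method postdates this post.</p>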
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select the PDT Development Tools All In One SDK (leave the others unselected) and click Next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining things in Ruby, this looks like yet another variable-like construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string and integer values; it&#39;s immutable; and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages) two strings are different objects even if they consist of the same sequence of characters. In Ruby two Symbols made of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
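<p>The one-copy claim is easy to verify in irb; <code>equal?</code> compares object identity rather than value:</p>

```ruby
# Two uses of the same symbol are the very same object.
a = :status
b = :status
p a.equal?(b)      # true

# Two string literals with equal contents are distinct objects.
s = 'status'
t = 'status'
p s.equal?(t)      # false

# Symbols are immutable; plain strings are not.
p :status.frozen?  # true
```
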
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime: you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character for single-byte encodings and up to four bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump, in a single line:</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided, as (by default, with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysql issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. It means writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to it during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how the tool can best be leveraged for their environment. There’s plenty of documentation out there, but the best place to start is the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>Say you have a string, for example:</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring through:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>Here -r is short for --remote and -d is short for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run:</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>This happens because PEAR's download cache directory is missing. Recreating it fixes the nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to http://127.0.0.1:8808 and read the API docs for all of your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small: some basic webpage views, maybe a few forms and, likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially, Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> instructions up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised, so to get up and running I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to set up your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will only sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name, whereas in Ruby and Perl it is the first argument passed to the program.</p>&#13;
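<p>If you do need the script name in Ruby, it is still available, just not in ARGV: the global $0 (aliased as $PROGRAM_NAME) holds it. A quick sketch (the filename is hypothetical):</p>&#13;

```ruby
# argv_demo.rb: ARGV holds only the arguments;
# the program name lives separately in $0 / $PROGRAM_NAME.
puts "program: #{$0}"
puts "args:    #{ARGV.inspect}"
```
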
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<pre>aaron ~/Development/ruby/testapp $ rake db:create&#13;
(in /Users/aaron/Development/ruby/testapp)&#13;
Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser",&#13;
"adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5,&#13;
"password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci&#13;
(if you set the charset manually, make sure you have a matching collation)</pre>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own Macports mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster: you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Finally, convert your language pack to UTF-8 as well.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, if you prefer), Procs and Mixins (traits), one simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables declared with a capitalised name. Oh, sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
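To make that concrete, here is a minimal sketch (the constant name ANSWER is invented for illustration). Reassigning a Ruby constant produces only an "already initialized constant" warning on stderr; the new value sticks:

```ruby
# A 'constant' in Ruby is any identifier that starts with a capital letter.
ANSWER = 42

# Reassigning it is not an error. Ruby prints a warning
# ("already initialized constant ANSWER") to stderr and carries on.
ANSWER = 43

puts ANSWER  # prints 43
```

Even Object#freeze only freezes the referenced object against mutation; it does nothing to stop the constant being rebound to a different object.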
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately, in Ruby, a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in statically typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys who haven’t been seduced by Python, and a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little like assembly was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable sigils, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an appreciation of memory management. C is unusual among today's programming environments in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also what keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes of storage: enough for 19 characters plus the terminating NUL. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone beyond the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply open a handle with <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, i.e. just have '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or an internal (500) error, you will receive a notification email from Worldpay. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the CLI program that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of keeping third-party code out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display, you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model with Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive filesystem (e.g., case-sensitive HFS on Mac, or most Unix filesystems) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the theme layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If you are only updating simple attributes (for example, a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find the attribute code, look it up either in the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
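As a sketch of how this fits into a batch job: the loop below assumes a bootstrapped Magento 1.x install and an existing num_sales attribute; lookupNumSales() is a hypothetical placeholder for your own ranking logic. Only getModel(), getResource() and saveAttribute() are the calls described in this post, so treat this as an illustration rather than a drop-in script.

```php
<?php
// Sketch only: requires a Magento 1.x install to actually run.
require_once 'app/Mage.php';
Mage::app();

// Load the products, selecting just the attribute we want to touch.
$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    // lookupNumSales() is hypothetical — substitute your own logic here.
    $product->setNumSales(lookupNumSales($product->getId()));

    // Writes only this one attribute value, skipping the full save() overhead.
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```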
<p>The getResource()-&gt;saveAttribute() call takes around a fifth of a second, while a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The basename and dirname commands are handy and behave similarly to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old, and the query above will match it.</p>&#13;
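If you are assembling the query from PHP rather than in pure SQL, the same cutoff can be computed with core PHP date functions. A small sketch — strtotime and date are standard PHP; the PDO lines are illustrative only and assume a $pdo connection:

```php
<?php
// Compute the same 30-day cutoff shown above.
$cutoff = date('Y-m-d', strtotime('2010-05-20 -30 days'));
echo $cutoff; // 2010-04-20

// Then the WHERE clause becomes a simple comparison, e.g. with PDO:
// $stmt = $pdo->prepare('SELECT * FROM table WHERE date_column < ?');
// $stmt->execute(array($cutoff));
```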
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by subbing the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider  three aspects</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first, though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name=root element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (and you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile or ~/.profile or ~/.bash_profile, and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default, both of these functions expect iso-8859-1 input. To correctly prepare your utf-8 text for output, pass your text&#39;s encoding, in this case utf-8, as the third parameter. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
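To make the whole normalise-then-encode flow concrete, here is a minimal, self-contained sketch; the literal "caf\xE9" stands in for iso-8859-1/cp-1252 input such as text pasted from Word:

```php
<?php
// 'café' with an iso-8859-1 e-acute (0xE9) — not valid utf-8 on its own.
$isoText = "caf\xE9";

// Normalise the input to utf-8 before storing or displaying it.
$utf8Text = iconv('ISO-8859-1', 'UTF-8', $isoText);

// Encode for output, telling PHP the input is utf-8 (the third parameter).
echo htmlentities($utf8Text, ENT_QUOTES, 'UTF-8'); // caf&eacute;
```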
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick, on the BSD sed that ships with Mac OSX, is to pass an explicit (empty) backup suffix: sed -i '' 's/hello/goodbye/g' helloworld.txt (GNU sed, by contrast, accepts -i with no argument). </p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expression expands to the number of elements in FILES, so $(seq 0 $((${#FILES[@]} - 1))) produces every valid index into the array; the seq command prints a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
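<p>Incidentally, if you do not need the numeric index itself, bash can loop over the array values directly, which sidesteps the seq arithmetic entirely:</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 )&#13;
&#13;
for FILE in "${FILES[@]}"; do&#13;
  echo "$FILE"&#13;
done</pre>&#13;
<p>Quoting "${FILES[@]}" keeps any elements containing spaces in one piece.</p>&#13;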
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment or shipping account credentials, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Run from the Magento root, passing the plaintext value as the first argument&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default'); // bootstrap Magento so the crypt key is loaded&#13;
$data = $_SERVER['argv'][1]; // the plaintext value to encrypt&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
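<p>A quick way to convince yourself the alias behaves is a throwaway sandbox in a temp directory (the repo and branch names here are invented for the demo; git switch needs git 2.23 or newer, and the alias must already be in your ~/.gitconfig):</p>
<pre><code>$ cd "$(mktemp -d)" &amp;&amp; git init -q --bare remote.git
$ git clone -q remote.git work &amp;&amp; cd work
$ git commit -q --allow-empty -m base &amp;&amp; git push -q origin HEAD
$ git switch -q -c zendesk
$ git push -q origin HEAD          # oops, forgot -u again
$ git sup
$ git rev-parse --abbrev-ref @{u}  # origin/zendesk
</code></pre>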
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
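<p>A minimal illustration of both the pitfall and one of the workarounds from that FAQ (process substitution, a bash-only feature):</p>
<pre><code>count=0
printf &#39;a\nb\n&#39; | while read -r line; do count=$((count + 1)); done
echo "count=$count"   # count=0 - the while loop ran in a subshell

count=0
while read -r line; do count=$((count + 1)); done &lt; &lt;(printf &#39;a\nb\n&#39;)
echo "count=$count"   # count=2 - no pipeline, so no subshell
</code></pre>
<p>The FAQ covers further options, such as bash&#39;s lastpipe shell option.</p>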
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
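<p>If you want to see the setting in action without touching a real repository, a throwaway sandbox shows both the refusal and the escape hatch (git init -b needs git 2.28 or newer, git switch needs 2.23; here merge.ff is set per-repo rather than globally):</p>
<pre><code>cd "$(mktemp -d)" &amp;&amp; git init -q -b main repo &amp;&amp; cd repo
git config merge.ff only
git commit -q --allow-empty -m base
git switch -q -c feature &amp;&amp; git commit -q --allow-empty -m feature-work
git switch -q main &amp;&amp; git commit -q --allow-empty -m diverge
git merge feature || echo &#39;refused: rebase feature first&#39;
git merge --no-ff -m &#39;merge feature&#39; feature   # forcing it through still works
</code></pre>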
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt="The Grumpy Programmer&#39;s PHPUnit Cookbook cover"></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea of those who attempt it, and do a good job, getting appropriately rewarded for doing so. </p>
<p>So, back to the book. You find peppered throughout the introduction to PHPUnit subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is that of always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my PDF version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a>, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, its motivations, and perhaps a brief comparison between the two principal xUnit TDD styles: the Statist style and the Mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state, and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.price, 0)) AS min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null when tableref.price equals 0. Returning null removes that value from consideration by MIN, which has the effect of only comparing values greater than zero.</p>
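<p>The post above targets MySQL, but NULLIF behaves the same way in SQLite, which makes for a quick local demonstration (the products table here is invented for the example):</p>
<pre><code>sqlite3 :memory: &#39;
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 0.0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;&#39;
</code></pre>
<p>Group 1 comes back as 4.5, while group 2 (all zero prices) yields NULL, shown as an empty column.</p>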
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our Magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc (or source ~/.zshrc)
</code></pre>
<p>Now the alias is active, we can check that it&#39;s working.</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
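<p>A small aside: you can skip openssl&#39;s interactive prompts with -subj, and it&#39;s worth confirming the key and certificate actually match before pointing Nginx at them. A sketch (the CN value is an example; adapt to your host name):</p>

```shell
# Work in a scratch directory, generate the pair non-interactively
# (-subj skips the question prompts), then sudo mv the files into place.
cd "$(mktemp -d)"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=magento.dev" -keyout magento.dev.key -out magento.dev.crt
# Sanity check: the two modulus digests must be identical, otherwise nginx
# will reject the certificate/key pair at startup.
openssl rsa  -noout -modulus -in magento.dev.key | openssl md5
openssl x509 -noout -modulus -in magento.dev.crt | openssl md5
```

<p>If the two digests differ, you&#39;ve mixed up a key and certificate from different runs.</p>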
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what variant options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
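<p>As an aside (nothing to do with PHP&#39;s naming, just a contrast), shell parameter expansion draws the same first-versus-last distinction, and its single-versus-double <code>#</code> pairing is at least mnemonic:</p>

```shell
# The same first/last-occurrence split via shell parameter expansion:
# a single # strips the shortest leading match, ## strips the longest.
url='http://www.google.com/a/b/c/d.img'
echo "${url##*/}"   # after the last '/':  d.img
echo "${url#*/}"    # after the first '/': /www.google.com/a/b/c/d.img
```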
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing, and those, whether in his or the other labs, or in management, that, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s baseprice is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
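<p>In practice that override is just a mkdir and a cp from the Magento root. The sketch below uses a scratch tree so it is safe to run anywhere; substitute your real Magento root when doing this for real.</p>

```shell
# Sketch of a codepool override. A scratch tree stands in for a real install.
cd "$(mktemp -d)"
REL=Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
mkdir -p "app/code/core/$(dirname "$REL")"
echo '<?php // the stock core class' > "app/code/core/$REL"
# The actual override: mirror the path under local/ and copy the file across,
# then apply the unit price fix to the local copy only.
mkdir -p "app/code/local/$(dirname "$REL")"
cp "app/code/core/$REL" "app/code/local/$REL"
ls "app/code/local/$REL"
```

<p>Because local sits ahead of core in the classloader order, the copied (and edited) file wins without touching the core codepool.</p>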
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order local, community then core. This means if two classes have the name Mage_Core_Model_Foo one exists in local the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite their formalising most of the vocabulary for OO software development during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero, so at the end of a thirty minute work sprint I would take five minutes to quickly flick through the list. Most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column command columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
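<p>For example, feeding a couple of colon-delimited lines through it (sample records invented for illustration) lines the fields up neatly:</p>

```shell
# Align colon-separated records into a table
printf 'root:x:0\ndaemon:x:1\n' | column -s':' -t
```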
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type the name of a directory to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem invokes make with the appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward all connections made to port 9000 on its localhost back to port 9000 on the ssh client, i.e. your dev machine. When xdebug on the VM connects to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
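<p>Putting it together, a whole session looks something like this (hostnames as above; invoking the phpunit script through php directly is my own workaround for the -d issue, so treat this as a sketch rather than gospel):</p>

```shell
# 1. From the dev machine, open the reverse tunnel to the VM
ssh -R 9000:localhost:9000 myvm.local

# 2. Inside that ssh session on the VM: point xdebug at localhost,
#    which the tunnel carries back to the IDE listening on port 9000
PHP_IDE_CONFIG='serverName=mydevmachine.local' \
  php -dxdebug.remote_host=localhost "$(which phpunit)" -c phpunit.xml
```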
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to PERL&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
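<p>To sketch how Composer sidesteps that exact scenario (package names here are hypothetical, and this is an illustration rather than a recipe): each project resolves and installs its own dependencies under its own vendor directory, so two projects can happily depend on different versions, or stabilities, of the same package.</p>

```shell
# Project x wants a stable y
cd ~/projects/x && composer require acme/y:1.0.*

# Project z wants a beta of y; the @beta stability flag allows it,
# and neither install interferes with the other
cd ~/projects/z && composer require acme/y:2.0.*@beta
```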
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a filename argument to work with, so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care how the contents differ between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
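<p>A typical fetch-inspect-merge round trip using this, then, looks like the following (assuming the usual master/origin setup):</p>

```shell
git fetch origin                              # update remote-tracking refs only
git diff master..origin/master --name-only    # which files would change?
git merge origin/master                       # merge once you're happy
```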
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it really need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will check the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
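<p>In other words, the fix is simply to order the declarations so the catch-all comes first. A minimal sketch of what I mean (the handler method names here are invented for illustration):</p>

```ruby
class ApplicationController < ActionController::Base
  # Declared first == consulted last: Rails searches handlers bottom-up,
  # so the Exception catch-all must sit at the top of the list.
  rescue_from Exception,                    with: :render_error
  # More specific handlers below win over the catch-all above.
  rescue_from ActiveRecord::RecordNotFound, with: :render_not_found
end
```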
<p>What can we learn from this? Well, one lesson is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
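<p>As a footnote (my addition, not from the original post): git can also do the fetch and the prune in one step, which saves remembering a separate incantation:</p>

```shell
# Fetch from origin and drop any remote-tracking branches that
# no longer exist on the remote, in a single command.
git fetch --prune origin
```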
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later on, you could imagine another engineer coming in to make the code XSS-safe, fixing one spot but (programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine; it does what you tell it to do, but for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately tell a fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production, and the resulting Magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped &#39;;&#39; terminates the command sequence (much like it does in regular bash).</p>
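<p>As an aside (my addition, not from the original post): if your GNU find is new enough to have -newermt, you can skip creating the boundary files entirely and pass timestamps directly, and -delete saves spawning an rm per file:</p>

```shell
# Same range query without boundary files (GNU findutils 4.3.3+):
# list files modified after the start time but not after the end time.
find . -type f -newermt '2012-08-01 00:00' ! -newermt '2012-08-07 00:00' -ls

# And to delete the matches, -delete avoids an rm per file.
find . -type f -newermt '2012-08-01 00:00' ! -newermt '2012-08-07 00:00' -delete
```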
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old MagSafe (1) power pack, which I did from my old 13&quot; MacBook Pro, and it has a higher or equal wattage rating, you can use it with your MacBook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt MacBook Pro MagSafe can power a 45 watt MacBook Air, but a 60 watt MagSafe can&#39;t power an 85 watt 15&quot; MacBook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same connector form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas MacPorts will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your MacPorts shells. Once that&#39;s done, chsh will let you change no problem.</p>
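<p>Put together, the dance looks something like this (my sketch, not from the original post; the path assumes the default MacPorts prefix, and appending to the real /etc/shells needs root):</p>

```shell
# Add the MacPorts bash to the list of permitted login shells if it
# isn't already there. SHELLS is parameterised only so the snippet can
# be dry-run against a scratch copy; on a real system it is /etc/shells
# and the append needs sudo.
SHELLS=${SHELLS:-/etc/shells}
NEW_SHELL=/opt/local/bin/bash
grep -qx "$NEW_SHELL" "$SHELLS" || echo "$NEW_SHELL" >> "$SHELLS"
# chsh -s "$NEW_SHELL"   # chsh will now accept the new path
```

The grep -qx guard makes the snippet idempotent, so re-running it never duplicates the entry.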
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; \
    | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally, I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
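<p>To see the plumbing without a database handy, you can feed the same awk, paste and sed stages a few static lines (the sample values are mine, standing in for the mysql output):</p>

```shell
# Stand-in for the mysql --silent output: one column value per line.
printf 'alpha\nbeta\ngamma\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# → ['alpha','beta','gamma'];
```

Note the sed replacement uses `&` (the whole match) to wrap the line, which works in basic regular expressions without needing capture groups.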
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way, e.g.:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are either NULL or lie in a specific range.</p>
<p>It would seem logical to specify:</p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that has a value for the field but lies outside the date range. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is:</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. While that means it also has to be constructed anew on each loop, it allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break, I mean that in the absolute best case they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
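<p>For convenience, the whole sequence above can be collected into a sketch like the one below. The device names, filesystem types and the /mnt/ubuntu mount point are the examples used in this post, so adjust them to your own layout; prepare_chroot and run are just local helper names, and the script defaults to printing each command rather than executing it.</p>

```shell
#!/bin/sh
# Sketch of the recovery sequence described above.
# ROOT_DEV, BOOT_DEV and TARGET are examples -- adjust to your layout.
ROOT_DEV=/dev/sda5
BOOT_DEV=/dev/sda1
TARGET=/mnt/ubuntu

# Defaults to a dry run that just prints each command; set DRY_RUN=0
# and run as root to actually execute them.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

prepare_chroot() {
    run mount -t ext4 "$ROOT_DEV" "$TARGET"
    run mount -t ext2 "$BOOT_DEV" "$TARGET/boot"       # only if /boot is separate
    run mount -t proc none "$TARGET/proc"
    run mount -o bind /dev "$TARGET/dev"               # bind: reuse the host's /dev
    run mount -o bind /sys "$TARGET/sys"
    run cp /etc/resolv.conf "$TARGET/etc/resolv.conf"  # working DNS inside the chroot
    run chroot "$TARGET" /bin/bash
}

prepare_chroot
```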
<p>The chroot will behave pretty much as it would if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) call to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Oddly though, the status column came first and the product id column second (unlike the if branch, where the product id column comes first). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where the first column is the key and the second column is the value. Because the SQL returned the status column first (i.e. as the key), the result set collapsed to just two rows, one for each unique status code. For this code to work as expected, the entity id (product id) needs to be the first column in the result set, so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software Guided by Tests (GOOS), a book that looks at Test-Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object-Oriented Analysis and Design (OOAD) drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is still unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end, you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test-Driven design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders, techniques for creating test data for use in your test cases, particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book born out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise software.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile gets sourced only on login, specifically when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open up a login shell elsewhere, such as with the su - command or an explicit login shell sometimes provided by a desktop environment. In these cases the rule still applies: a login shell means .bash_profile is sourced, and .bashrc only if .bash_profile sources it itself.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
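<p>A minimal .bash_profile along those lines might look like this sketch (the PATH line is just an example of one-time setup):</p>

```shell
# ~/.bash_profile -- sourced by login shells only
export PATH="$HOME/bin:$PATH"   # example of one-time environment setup

# Delegate everything else to .bashrc so login and non-login
# interactive shells end up configured the same way.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```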
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it all in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
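<p>If the set-selections pipeline feels opaque, the same packages can be found directly from dpkg&#39;s status letters: packages that have been removed but still have config files show up with an &#39;rc&#39; state in dpkg -l. A small sketch (filter_rc is just a helper name here, and the sample lines stand in for real dpkg output):</p>

```shell
# filter_rc: given `dpkg -l` output on stdin, print the names of
# packages in the "rc" state (removed, config files remain).
filter_rc() { awk '/^rc/ {print $2}'; }

# Demo on captured sample output; on a real system you would run:
#   dpkg -l | awk '/^rc/ {print $2}'
#   sudo apt-get purge $(dpkg -l | awk '/^rc/ {print $2}')
printf '%s\n' \
  'ii  bash    5.0-4  amd64  GNU Bourne Again SHell' \
  'rc  oldpkg  1.0-1  amd64  removed, config files remain' | filter_rc
# -> oldpkg
```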
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm must only be used when the VM is powered off; use controlvm to modify a running VM.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
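<p>The commands above can be wrapped in a tiny helper function to save typing. This is just a sketch: vbm is a hypothetical local name, and it assumes VBoxManage is on your PATH.</p>

```shell
# vbm: thin wrapper over the VBoxManage commands above.
vbm() {
    cmd=${1:-}; vm=${2:-}
    case "$cmd" in
        start) VBoxManage startvm "$vm" --type headless ;;
        stop)  VBoxManage controlvm "$vm" poweroff ;;
        ls)    VBoxManage list vms ;;
        rm)    VBoxManage unregistervm "$vm" --delete ;;
        *)     echo "usage: vbm start|stop|ls|rm [vm-name]" >&2; return 1 ;;
    esac
}

# e.g. vbm start "Name of VM"
```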
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I have neither a compatible libXss nor the misc Qt libraries installed.</p>
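<p>With output this long it&#39;s easy to miss an entry. A quick filter for just the unresolved libraries can help (missing_libs is a hypothetical helper name, and the sample lines stand in for real ldd output):</p>

```shell
# missing_libs: given `ldd` output on stdin, print only the libraries
# the dynamic loader could not resolve.
missing_libs() { awk '/not found/ {print $1}'; }

# Demo on a captured sample; on a real system:
#   ldd /usr/bin/skype | missing_libs
printf '%s\n' \
  'libXss.so.1 => not found' \
  'librt.so.1 => /lib32/librt.so.1 (0xf75d0000)' \
  'libQtGui.so.4 => not found' | missing_libs
```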
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
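<p>As an aside, when chasing down a binary&#39;s missing libraries it helps to filter ldd&#39;s output to just the unresolved entries. A quick sketch (using /bin/ls as a stand-in binary so it runs anywhere):</p>

```shell
# Print only unresolved shared-library dependencies; if there are none,
# say so instead of printing nothing.
ldd /bin/ls | grep 'not found' || echo 'all dependencies satisfied'
```

Run against /usr/bin/skype before installing the i386 packages, this would have listed exactly the libraries we just pulled in.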
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
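<p>As a side note, the sed call in that loop can be avoided entirely with bash parameter expansion, which strips a suffix without spawning any extra processes:</p>

```shell
# ${IMAGE%.jpg} removes a trailing .jpg; no sed or subshell required.
IMAGE=holiday-photo.jpg
echo "${IMAGE%.jpg}-resized.jpg"   # prints holiday-photo-resized.jpg
```

So the convert call becomes: convert -resize '1280x720' "$IMAGE" "${IMAGE%.jpg}-resized.jpg".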
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? A leading : with no local branch name is basically saying push <em>nothing</em> into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
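<p>If you want to experiment without touching a real remote, the whole lifecycle can be replayed against a throwaway local bare repository. A sketch (all paths and names here are arbitrary):</p>

```shell
# Create a scratch bare 'remote', publish a branch to it, then delete
# that branch by pushing nothing into it.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email demo@example.com
git config user.name Demo
git remote add origin "$tmp/origin.git"
git commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD              # publish the default branch
git push -q origin HEAD:develop      # create a remote branch 'develop'
git push -q origin :develop          # push 'nothing' into it: deleted
git ls-remote --heads origin         # only the default branch remains
```

The final ls-remote confirms origin is back to a single branch.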
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before it commences writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 (cp1252) to UTF-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
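<p>If moreutils isn&#39;t installed, the soak-everything-then-write behaviour is easy to approximate with a temporary file. This is only a sketch (it lacks sponge&#39;s permission-preserving niceties), but it shows the idea:</p>

```shell
# A minimal stand-in for sponge: buffer all of stdin, then replace the target.
soak() {
  local tmp
  tmp=$(mktemp)
  cat > "$tmp"    # read stdin to EOF before touching the target file
  mv "$tmp" "$1"  # then overwrite the target in one move
}

printf 'caf\xe9\n' > demo.txt                       # cp1252-encoded input
iconv -f cp1252 -t utf-8 demo.txt | soak demo.txt   # safe in-place conversion
```

Without the buffering step, the shell would truncate demo.txt before iconv had read it, which is exactly the trap sponge exists to avoid.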
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
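<p>You can exercise exactly the same plumbing offline; below, cat plays the part of wget -q -O -, and the tarball is fabricated on the spot purely for the demo:</p>

```shell
# Build a small tarball, remove the source directory, then extract it
# through process substitution, just as in the wget one-liner.
set -e
mkdir -p atarfile && echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile
tar zxv < <(cat atarfile.tar.gz)
cat atarfile/file.txt   # prints hello
```

Swap the cat for wget (or curl -s) and you are back to the original one-liner.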
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply ploughing through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can take the output of the module list command using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
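<p>Nothing about that trick is Drush-specific; command substitution turns any command&#39;s output into another command&#39;s argument list. A tiny generic sketch with made-up module names:</p>

```shell
# The inner command's stdout becomes the outer command's arguments.
printf 'ad\nad_channel\nclick_filter\n' > modules.txt
echo disable: $(cat modules.txt)   # prints: disable: ad ad_channel click_filter
```

The unquoted substitution word-splits on whitespace (including newlines), which is why a one-name-per-line list arrives as separate arguments.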
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups passed to -G are <em>appended</em> to the user&#39;s existing list of groups. Without it, the user&#39;s existing supplementary groups are replaced with those supplied.</p>
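<p>To check which supplementary groups a user currently has, id is handy (note the change only shows up in new login sessions). A sketch querying the current user rather than a hard-coded name:</p>

```shell
# List group names for the current user; pass a username to query another.
id -nG "$(whoami)"
```

Comparing this output before and after the usermod call (and a fresh login) confirms the append worked.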
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
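<p>The same sequence can be tried risk-free in a scratch repository. A sketch (file and branch names are arbitrary):</p>

```shell
# Replay the stash workflow: edit on master by mistake, shelve the change,
# switch to develop, and replay it there.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo 'one' > notes.txt
git add notes.txt && git commit -qm 'initial commit'
git branch develop

echo 'two' >> notes.txt        # the edit made on the wrong branch
git stash save -q              # shelve it, leaving master clean again
git checkout -q develop
git stash apply -q             # replay the edit on develop
git commit -aqm 'Apply stashed changes'
grep 'two' notes.txt           # the change now lives on develop
```

Note that apply leaves the stash entry around; git stash pop does apply-then-drop in one step.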
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
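For illustration, an include_path where the application&#39;s bundled library wins over the global PEAR copy might look like this in php.ini (the paths are examples, not taken from any particular install):

```ini
; Illustrative only - adjust the paths to your application and PEAR install
include_path = ".:/var/www/myapp/library:/usr/share/php"
```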
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository, you may find your builds failing. This is due to a configuration problem: your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
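Equivalently, you can let git write that file for you. In this sketch HOME is pointed at a scratch directory so it is safe to run anywhere; on a real box you would run the git config commands as the jenkins user (or point HOME at /var/lib/jenkins):

```shell
# HOME override is illustrative; use Jenkins' real home dir in production
export HOME=$(mktemp -d)
git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"
git config --global --list     # user.name=Jenkins, user.email=jenkins@localhost
```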
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new project, we copy this template and give our project a name. Choose whatever you like; for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that looks daunting at first glance. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository; for example, mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reloads the server configuration</li>
<li>restart - restarts the server</li>
<li>exit - shuts the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
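The embedded gist below has the exact query; as a rough sketch of the shape such a query takes against INFORMATION_SCHEMA.TABLES (this is my reconstruction, not the gist itself, and the schema name is a placeholder):

```sql
-- Sketch: approximate table sizes in MB for one schema (schema name is a placeholder)
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS total_mb
FROM information_schema.TABLES
WHERE table_schema = 'your_database'
ORDER BY (data_length + index_length) DESC;
```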
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved either with the package manager or with aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Development Tools (PDT), the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into GitHub (or any other remote git repository). Many of these involve losing your branch history and creating a brand-new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
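The whole add-and-push flow can be rehearsed locally against a bare repository standing in for GitHub; a sketch with throwaway mktemp paths and placeholder identity values:

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/remote.git"   # stand-in for the GitHub repository
git init -q "$work/project"
cd "$work/project"
git config user.name "Example"
git config user.email "example@example.com"
git commit -q --allow-empty -m 'Initial commit'
git remote add origin "$work/remote.git"
git push -q origin HEAD                 # pushes the current branch, as 'git push origin master' does from master
git ls-remote --heads origin            # the branch now exists on the remote
```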
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent git pull invocations will give you an ugly error</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin <a href="mailto:git@github.com">git@github.com</a>:ajbonner/foo.git</p>
</li>
</ol>
<p>4 Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
<p>Personally I find option 4 the best, as it involves the least amount of work.</p>
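<p>Option 1 boils down to the two configuration entries the error message asks for, which you can set directly with <em>git config</em> rather than editing the file by hand. A self-contained sketch (the scratch repository and remote URL are just for illustration):</p>

```shell
# Create a throwaway repository so the sketch is runnable anywhere
repo=$(mktemp -d)
cd "$repo"
git init -q
git remote add origin git@github.com:ajbonner/foo.git

# The two entries git's error message is asking for
git config branch.master.remote origin
git config branch.master.merge refs/heads/master

git config --get branch.master.remote   # prints: origin
git config --get branch.master.merge    # prints: refs/heads/master
```

<p>Option 4 (<em>git branch --set-upstream</em>) writes exactly these two entries for you.</p>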
<p>You can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>With this set, git configures branch tracking automatically when you create a branch from a remote-tracking branch, so you can avoid the steps above.</p>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store, as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database with a lot of orders, and it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
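<p>In case you can&#39;t see the embedded gist, the idea can be sketched as a small shell function along these lines (credentials, database and pattern are placeholders, and here the matching tables are collapsed into a single mysqldump call; the gist is the maintained version):</p>

```shell
# Dump only the tables whose names match a SQL LIKE pattern.
# -N suppresses column headers so mysql returns bare table names.
dump_tables_matching() {
    local user=$1 db=$2 pattern=$3
    local tables
    tables=$(mysql -u"$user" -p "$db" -N -e "SHOW TABLES LIKE '$pattern'")
    # Word splitting of $tables is intentional: one argument per table name
    mysqldump -u"$user" -p "$db" $tables
}
```

<p>Used as, for example, <code>dump_tables_matching myuser mydb &#39;mytables_%&#39; &gt; mytables.sql</code>.</p>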
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install it.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them more respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string and integer values; it is immutable; and there is only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant, natural sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. The two best explanations I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
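<p>One caveat: inside an alias, &quot;$@&quot; expands to the shell&#39;s own (normally empty) positional parameters, and the arguments you type are simply appended to the end of the expanded text. The alias happens to work because the appended filenames still reach qlmanage, but a shell function passes them through explicitly:</p>

```shell
# Function form of the alias: "$@" here really is the function's arguments
ql() { qlmanage -p "$@" > /dev/null 2>&1; }
```
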
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character in a single-byte encoding or up to 4 bytes per character in UTF-8. Bzip2 will bring the file size down a considerable amount, but it&#39;s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump; that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other clients. Writes can occur while the backup is taking place and will not affect the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean a MyISAM table’s integrity can be lost as writes occur to it during the backup process; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement on an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
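<p>Putting the export-side options together, a backup helper might look something like this (just a sketch: the function name and file layout are my own, and in many mysqldump versions --disable-keys is already on by default as part of the --opt group):</p>

```shell
# Dump with the locking and import-speed options discussed above
# (user and database are supplied as arguments)
backup_db() {
    local user=$1 db=$2
    mysqldump --single-transaction --skip-lock-tables \
              --disable-keys --no-autocommit \
              -u"$user" -p "$db" > "${db}.sql"
}
```

<p>Compress the resulting file afterwards with gzip or bzip2, rather than in the dump pipeline, for the reasons covered earlier.</p>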
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds or, to use git terminology, ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally it's shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small: some basic webpage views, maybe a few forms and, likely, some sort of search functionality. This is pretty basic, and if things need to change you can normally change them in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install the Git and Subversion plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to set up your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes a word can have a completely different meaning, such as 'bad' (which means 'bath' in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
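<p>As a quick shell illustration of the same point (the script name and path below are hypothetical, just for the demo): in sh/bash the program name lives in <code>$0</code> and the arguments start at <code>$1</code>.</p>

```shell
# Write a tiny throwaway script; in sh, $0 is the script's own name
# and $1 is the first command-line argument.
printf '#!/bin/sh\necho "name=$0 first=$1"\n' > /tmp/argv_demo.sh

sh /tmp/argv_demo.sh helloworld
# prints: name=/tmp/argv_demo.sh first=helloworld
```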
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed into the program. (Ruby exposes the program name separately, via <code>$0</code>.)</p>&#13;
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin / feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a>, in the course of the index's lifecycle you'll often want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. Even worse is if your unique ID happens to be a string such as a URL or path.  Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, it seems that in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3.  Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book or, in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
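<p>One caveat worth knowing: sed replaces every occurrence of the string, not just the schema declarations, so a forum post that happened to contain the literal text 'latin1' would be rewritten too. A quick sketch of the effect on a dummy one-line dump (file name hypothetical):</p>

```shell
# Create a one-line dummy dump and apply the same blanket substitution
printf 'CREATE TABLE post (body text) DEFAULT CHARSET=latin1;\n' > /tmp/dummy_dump.sql
sed -i 's/latin1/utf8/g' /tmp/dummy_dump.sql
cat /tmp/dummy_dump.sql
# prints: CREATE TABLE post (body text) DEFAULT CHARSET=utf8;
```

<p>It's worth grepping your real dump for 'latin1' first to check nothing but schema lines will be touched.</p>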
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
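<p>To convince yourself the conversion does what you expect, you can try iconv on a one-line sample first (file names here are hypothetical): byte 0xE9 is 'é' in latin1 and should come out as the two-byte UTF-8 sequence 0xC3 0xA9.</p>

```shell
# 0xE9 (octal 351) is 'é' in latin1; after conversion it becomes
# the two-byte UTF-8 sequence 0xC3 0xA9
printf 'caf\351\n' > /tmp/sample-latin1.txt
iconv -f latin1 -t utf-8 /tmp/sample-latin1.txt > /tmp/sample-utf8.txt
od -An -tx1 /tmp/sample-utf8.txt
# prints: 63 61 66 c3 a9 0a
```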
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any language packs you use also need converting to UTF-8 before being re-imported.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client and b) to impress the client sufficiently that they want to interview you.  So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[For some people, using #VIM comes naturally, i.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>The handy method Zend_Db_Select::assemble() will output the select object's state as SQL. Very useful when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back-to-front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys who haven't been seduced by Python, and a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your use of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit of learning a bit of C or C++, though, is gaining an appreciation of memory management. C is rare among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the language relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: char str[20];</p>&#13;
<p>This declares a string of 20 characters; str itself refers to an address in memory where 20 bytes have been reserved (room for 19 characters plus the NUL terminator). Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone outside the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20-char string defined but want to store a 25-char string, you need to reallocate memory. You could declare a new array, or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
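<p>To make that concrete, here is a rough, self-contained sketch (exact numbers will vary by PHP version and platform) showing that building a string by repeated concatenation really does consume freshly allocated memory under the hood:</p>

```php
<?php
// Build a string one character at a time and compare memory usage
// before and after; the growth comes from PHP reallocating the
// string buffer behind the '.' / '.=' operators.
$before = memory_get_usage();

$s = '';
for ($i = 0; $i < 100000; $i++) {
    $s .= 'x';
}

$after = memory_get_usage();

// The string itself accounts for at least 100000 bytes of the growth.
printf("length=%d extra=%d\n", strlen($s), $after - $before);
```

<p>As I understand it, PHP over-allocates as the string grows (much like realloc-based growth strategies in C), so the reported extra memory is typically somewhat larger than the final string length.</p>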
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. One benefit of getting your hands dirty in this way is a nice speed boost for the functions in your extension. The more important benefit, though, is how greatly it can improve your understanding of what PHP does for you under the hood, which lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way toward explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, some methods within Zend_Pdf expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. A way to avoid physically creating a file at all is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply $fh = fopen('php://memory', 'wb+'); and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
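<p>Here is a minimal end-to-end sketch (plain PHP, nothing Zend-specific) that writes CSV rows to php://memory with fputcsv, rewinds, and reads the whole thing back without ever touching the disk:</p>

```php
<?php
// Open an in-memory stream; it behaves like an ordinary file handle.
$fh = fopen('php://memory', 'wb+');

// fputcsv demands a stream handle, which php://memory happily provides.
fputcsv($fh, ['sku', 'qty']);
fputcsv($fh, ['ABC-123', 5]);

// Seek back to the start before reading, just as with a real file.
rewind($fh);
$csv = stream_get_contents($fh);
fclose($fh);

echo $csv; // two CSV lines, never written to disk
```

<p>The same pattern works with php://temp, which spills to a temporary file once the data grows beyond a threshold.</p>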
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded POST data), and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[development tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
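<p>To see why, here is a rough sketch of the underscore-to-path translation Magento's autoloader performs (modelled on Varien_Autoload; the helper name is mine, and the real code uses DIRECTORY_SEPARATOR rather than a hard-coded slash):</p>

```php
<?php
// Roughly how Magento maps a class name to a file path: underscores
// become spaces, each word is capitalised, then spaces become slashes.
function classToPath(string $class): string
{
    return str_replace(' ', '/', ucwords(str_replace('_', ' ', $class))) . '.php';
}

// A camelcased model class maps cleanly...
echo classToPath('MyPackage_MyModule_Model_ALongNameForAModel'), "\n";
// ...but the class name built from getModel('mymodule/a_long_name_for_a_model')
// contains underscores, so it maps to a deeply nested (and wrong) path:
echo classToPath('Mymodule_Model_A_Long_Name_For_A_Model'), "\n";
```

<p>On a case-insensitive file system the lookup may still succeed by accident, which is exactly why these bugs only surface when the code moves to a case-sensitive environment.</p>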
<p>On Windows (with its case-insensitive file system) this happens to work; on a case-sensitive file system, e.g. a typical Unix setup or case-sensitive HFS+ on a Mac, it will not.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to set the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find an attribute code, look it up either in the database (the eav_attribute table) or in the admin backend under Catalog-&gt;Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second, while doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
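<p>The same arithmetic can be sanity-checked from the shell. This sketch uses GNU date (the BSD date shipped with Mac OSX uses a different syntax, e.g. -v-30d):</p>

```shell
# Reproduce DATE_SUB('2010-05-20', INTERVAL 30 DAY) with GNU date:
# subtracting 30 days from 2010-05-20 gives 2010-04-20.
date -d '2010-05-20 -30 days' +%F
```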
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/<span class="Apple-style-span"><span class="Apple-style-span">Mage/Adminhtml/etc/config.xml.</span></span></p>&#13;
<p><span class="Apple-style-span"><span class="Apple-style-span">This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</span></span></p>&#13;
<p><br />This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>The type="core/template" attribute refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here gives fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore within the &lt;customer_account&gt; element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
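<p>As a quick sanity check, the steps above can be sketched in the shell. C.UTF-8 is used here only because it is available almost everywhere; on a Mac you would instead pick de_DE.UTF-8 or en_GB.UTF-8 from the output of locale -a:</p>

```shell
# List the UTF-8 capable locales this machine knows about
# (exact names differ between Linux and Mac OSX).
locale -a | grep -i 'utf' || true
# Activate one for the current session; putting this line in
# ~/.profile or ~/.bash_profile makes it permanent.
export LC_ALL='C.UTF-8'
echo "$LC_ALL"
```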
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks.</p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ: a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, because that byte sequence is not valid utf-8.</p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
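<p>The iconv command-line tool (shipped with most Unix systems) performs the same conversion outside PHP, which is handy for normalising whole files. A minimal sketch:</p>

```shell
# Convert Latin-1 (iso-8859-1) bytes to utf-8 on stdout.
# \xfc and \xdf are the iso-8859-1 bytes for ü and ß.
printf 'Gr\xfc\xdfe\n' | iconv -f ISO-8859-1 -t UTF-8
```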
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8.</p>
<p><code>htmlentities($text, ENT_COMPAT, &#39;UTF-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks.</p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It isn't, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>. This is because the BSD sed shipped with Mac OSX requires a backup-file suffix as an argument to -i (GNU sed makes it optional).</p>&#13;
<p>The trick is to pass an empty suffix, like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code></p>&#13;
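<p>A small sketch that works with both flavours, trying the BSD form first and falling back to the GNU form (the file name and substitution here are just placeholders):</p>

```shell
# Set up a scratch file to edit in place.
printf 'hello world\n' > /tmp/helloworld.txt
# BSD sed wants a backup-suffix argument after -i ('' = no backup);
# GNU sed rejects that form, so fall back to plain -i.
sed -i '' 's/hello/goodbye/g' /tmp/helloworld.txt 2>/dev/null \
  || sed -i 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt
```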
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expression generates the list of array indices: ${#FILES[@]} is the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you get the numbers 0 through 4, one per line.</p>&#13;
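<p>For comparison, a simpler and more idiomatic form iterates over the elements directly, with no seq or index arithmetic at all:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 )
# Quoting "${FILES[@]}" expands each element as its own word,
# so paths containing spaces survive intact.
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```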
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, or test payment and shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 "$file" "resized_$file"; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here&#39;s a hot tip, don&#39;t use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
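<p>For completeness, here is a sketch of what such a config file might look like on the master branch at the time, cribbed from the project README. The fixer names and the use of DefaultFinder are illustrative; check the project&#39;s own docs before copying.</p>

```php
<?php
// .php_cs -- illustrative sketch of a master-branch config file.
// Fixer names here ('indentation', 'elseif') are examples only.
$finder = Symfony\CS\Finder\DefaultFinder::create()
    ->in(__DIR__);

return Symfony\CS\Config\Config::create()
    ->fixers(array('indentation', 'elseif'))
    ->finder($finder);
```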
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux; you just run</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
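<p>If you want to see what the alias expands to, you can poke at <em>git symbolic-ref</em> in a scratch repository. The branch name below is a throwaway example:</p>

```shell
# Create a scratch repo and switch to a new branch (no commits needed)
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git checkout -qb zendesk

# symbolic-ref --short prints the current branch name; the sup alias
# splices this into --set-upstream-to=origin/<name>
branch=$(git symbolic-ref --short HEAD)
echo "git branch --set-upstream-to=origin/$branch"
```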
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
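<p>A minimal demonstration of the behaviour, plus one portable workaround using a here-document so the loop runs in the current shell:</p>

```shell
# The while-loop on the right of the pipe runs in a subshell,
# so its changes to count vanish when the pipeline ends.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo "count after pipeline: $count"    # still 0

# Workaround: feed the loop from a here-document instead of a pipe;
# the loop now runs in the current shell and the variable survives.
count=0
while read -r line; do count=$((count+1)); done <<EOF
a
b
c
EOF
echo "count after here-doc: $count"    # 3
```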
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff &lt;branch&gt;
</code></pre>
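<p>A quick way to see the setting in action is in a throwaway repository; all the names below are illustrative:</p>

```shell
# Scratch repo with an identity so commits work anywhere
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name you
git config merge.ff only            # refuse anything but fast-forwards

echo base > file && git add file && git commit -qm 'base'
main=$(git symbolic-ref --short HEAD)

git checkout -qb feature
echo change >> file && git commit -qam 'feature work'

git checkout -q "$main"
git merge feature                   # linear history, so this fast-forwards
```

Had `main` gained a commit of its own in the meantime, the same merge would be refused with "Not possible to fast-forward, aborting." until you rebase (or force it with --no-ff).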
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and, at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing a description message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: the Statist TDD and Mockist/London School TDD styles. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and instead more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
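<p>The same idea is easy to try outside MySQL too. Here is a sketch using the sqlite3 command line tool (the table and column names are invented for the demo); note that a group whose prices are all zero comes back as NULL rather than 0.00:</p>

```shell
# Build an in-memory table and apply MIN(NULLIF(...)) per group.
# Group 1 has a zero-priced row that should be ignored;
# group 2 is all zeros, so its minimum degrades to NULL.
result=$(sqlite3 ':memory:' "
CREATE TABLE products (group_id INTEGER, price REAL);
INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 0.0);
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products GROUP BY group_id;")
echo "$result"    # 1|4.5 then 2| (NULL renders as empty)
```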
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn on the channel autodiscovery option, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        fastcgi_read_timeout 120;
        fastcgi_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing it under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
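<p>The generation step can also be run non-interactively, and it&#39;s worth verifying the certificate before pointing nginx at it. A minimal sketch (the -subj value here is just the hostname we care about; adjust filenames to taste):</p>

```shell
# generate a throwaway self-signed pair non-interactively (-subj skips the prompts)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=magento.dev" -keyout magento.dev.key -out magento.dev.crt

# confirm the certificate carries the hostname we expect before installing it
openssl x509 -noout -subject -in magento.dev.crt
```

<p>If the subject line shows the wrong CN, browsers will warn even after you trust the certificate, so it&#39;s a cheap check to make.</p>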
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
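<p>If you want to see why the traverse bit matters, here&#39;s a throwaway demonstration (run it as a regular user; root bypasses permission checks entirely, and the directory names are purely illustrative):</p>

```shell
mkdir -p sandbox/outer/inner
echo hello > sandbox/outer/inner/file.txt

chmod a-x sandbox/outer   # drop the traverse bit on one path element
cat sandbox/outer/inner/file.txt 2>/dev/null || echo "blocked"

chmod a+x sandbox/outer   # restore it and the file is reachable again
cat sandbox/outer/inner/file.txt
```

<p>Read permission on the file itself is not enough: every directory on the way down needs the execute (traverse) bit, which is exactly the situation the web server finds itself in.</p>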
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file path for the mysql version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
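<p>One gotcha with tee: without the -a flag it truncates the target file rather than appending, so double-check before pointing it at an ini file that already has content. A quick generic illustration (the filenames and socket path here are just for demonstration):</p>

```shell
# a pre-existing ini file with one setting in it
printf 'existing=1\n' > demo.ini

# tee alone would clobber it; tee -a appends and preserves what was there
printf 'pdo_mysql.default_socket=/tmp/mysqld.sock\n' | tee -a demo.ini > /dev/null

cat demo.ini
```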
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
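<p>For what it&#39;s worth, the shell makes this particular job - grabbing everything after the last slash - rather more memorable, which only underlines the naming complaint:</p>

```shell
url='http://www.google.com/a/b/c/d.img'

echo "${url##*/}"   # strip the longest prefix ending in '/', leaving d.img
basename "$url"     # the basename utility does the same job
```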
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed if it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his or the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx. It is by calling the nginx server command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
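<p>Under the hood, -s just sends a Unix signal to the master process: stop maps to TERM and quit maps to QUIT, with the pid read from nginx&#39;s pid file. The pid-file mechanics are easy to try with any stand-in process (sleep here is just a placeholder daemon; don&#39;t run this against a real server):</p>

```shell
# stand-in daemon: background a process and record its pid, nginx-style
sleep 300 &
echo $! > /tmp/demo.pid

# "nginx -s stop" boils down to signalling the recorded pid
# (nginx itself maps stop -> TERM and quit -> QUIT)
kill -TERM "$(cat /tmp/demo.pid)"
```

<p>So if the nginx binary on your PATH ever disagrees with the running master, you can always fall back to signalling the pid in the pid file directly.</p>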
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, however, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason about why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80, and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk formalising most of the vocabulary for OO software development, it&#39;s hard not to feel like some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few extra layers of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to compose a long command; in my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
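<p>A quick sketch of the same idea on inline sample data, so it doesn&#39;t depend on the contents of /etc/passwd (exact column widths vary between the BSD and util-linux implementations):</p>

```shell
# -s sets the input field delimiter; -t aligns the fields into a table.
printf 'root:x:0\ndaemon:x:1\n' | column -s ':' -t
```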
<p>These next few are specific to zsh. While I do love bash, since switching to zsh I haven&#39;t really looked back; when you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number then switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
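<p>For the curious, this behaviour can be recreated with a few lines of zshrc configuration. The sketch below is an approximation of what frameworks like oh-my-zsh ship; the option names are standard zsh, but the aliases are illustrative rather than built in:</p>

```shell
setopt auto_pushd          # every cd pushes the old directory onto the stack
setopt pushd_ignore_dups   # keep the stack free of duplicates
setopt auto_cd             # a bare directory name behaves like cd
alias d='dirs -v'          # list the stack with numeric indexes
for i in 1 2 3 4 5 6 7 8 9; do
  alias "$i"="cd +$i"      # typing an index jumps to that stack entry
done
```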
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, only to get permission denied when you go to write it...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available on GitHub, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine. However, if we want to debug during a phpunit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to port 9000 on its own localhost back to port 9000 on the connecting machine. So when xdebug connects to localhost:9000, it actually ends up talking to mydevmachine.local:9000.</p>
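<p>For reference, a sketch of the xdebug settings on the VM that this relies on (standard xdebug 2.x php.ini directives; with the tunnel up, the localhost default is exactly what we want):</p>

```ini
xdebug.remote_enable = 1
xdebug.remote_host = localhost
xdebug.remote_port = 9000
```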
<p>It&#39;s a bit of a hack, but a time-saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation, it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony&#39;s</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3, with features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="http://github.org">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second-generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
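<p>Composer handles exactly this scenario with per-package version constraints and stability flags. A minimal composer.json sketch (the acme/* package names are hypothetical; the @beta suffix is Composer&#39;s per-package stability syntax):</p>

```json
{
    "require": {
        "acme/package-x": "1.0.*",
        "acme/package-y": "2.0.*@beta"
    }
}
```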
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best (and dubious quality at worst), and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a>, <a href="http://doctrine-project.org">Doctrine</a>, are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so to avoid having to create temporary files we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
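<p>What process substitution actually does: bash hands the command a synthetic readable file (a /dev/fd path or named pipe) containing the other command&#39;s output, which is why file-only tools like xmllint can consume it. A trivial, self-contained illustration:</p>

```shell
# Wrapped in bash -c so it also works when pasted into a non-bash shell;
# cat is given a synthetic file path whose contents are the echoed text.
bash -c 'cat <(echo hello)'   # prints: hello
```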
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care what differs in the specific contents between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
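<p>To see the shape of the output, here&#39;s a throwaway repository built in a temporary directory (the file name and committer identity are purely illustrative):</p>

```shell
# Build a one-commit repo in a temp dir, then ask git show for names only.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo one > a.txt
git add a.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add a.txt'
# --format= suppresses the commit header, leaving just the touched paths
git show --name-only --format= HEAD
```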
]]></description><link>http://aaronbonner.io/post/35200205104/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important, as with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Shut down mysqld and restart it normally. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a> <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than on pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was broadly respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different from &#39;Old PHP&#39;. But &#39;Old PHP&#39; is what most people seem to find when searching Google, and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there: advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it really need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility, and method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/programming novice? I have got plenty of other stuff to learn, so I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? One thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh it, then, I need to prune my branches list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table-cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
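<p>If you can&#39;t load the gist, the override looks roughly like this. It&#39;s a sketch: the event and observer aliases shown are illustrative, so check your Magento version&#39;s Mage_Log config.xml for the exact node names:</p>

```xml
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <log>
                        <!-- setting type to the string 'disabled'
                             stops the observer ever firing -->
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```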
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of how bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe: they fixed one bit, but (programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
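<p>For reference, a hedged sketch of what that adminhtml.xml fragment might look like, here hiding the Catalog menu as an example (the node name must match the menu entry you want to remove, and the module you depend on just needs to not exist):</p>

```
&lt;config&gt;
    &lt;menu&gt;
        &lt;catalog&gt;
            &lt;depends&gt;
                &lt;module&gt;Nonexistent_Module&lt;/module&gt;
            &lt;/depends&gt;
        &lt;/catalog&gt;
    &lt;/menu&gt;
&lt;/config&gt;
```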
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
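<p>To make the mechanics concrete, here&#39;s a self-contained run in a scratch directory (all file names and dates are made up for the demo):</p>

```shell
# Build a scratch directory with files either side of a date window
rm -rf /tmp/find-demo && mkdir -p /tmp/find-demo && cd /tmp/find-demo
touch -t 202001010000 start_date_file   # window start: 2020-01-01 00:00
touch -t 202006010000 end_date_file     # window end:   2020-06-01 00:00
touch -t 202003150000 inside.txt        # falls inside the window
touch -t 201901010000 too_old.txt       # before the window
touch -t 202101010000 too_new.txt       # after the window
# List files newer than the start boundary and not newer than the end
# boundary, excluding the boundary files themselves
find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name start_date_file ! -name end_date_file
```

<p>Only inside.txt satisfies both tests, so it is the only file listed.</p>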
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much like ; does in regular bash).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equally rated wattage, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher rated power supply can support a lower rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
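<p>As a sketch of that edit (run here against a scratch copy so it&#39;s safe to try; for the real thing the target is /etc/shells, editing it needs sudo, and the path should be wherever Macports put bash):</p>

```shell
# Append the Macports bash to a shells file, but only if it's not already there.
SHELLS_FILE=/tmp/etc-shells-demo          # stand-in for /etc/shells
printf '/bin/sh\n/bin/bash\n' > "$SHELLS_FILE"
grep -qx '/opt/local/bin/bash' "$SHELLS_FILE" \
    || echo '/opt/local/bin/bash' >> "$SHELLS_FILE"
cat "$SHELLS_FILE"
# Once the real /etc/shells lists it:  chsh -s /opt/local/bin/bash
```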
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query in to mysql and ask it to give you raw unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
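<p>You can try everything after the mysql stage standalone by swapping mysql out for printf (the three sample values here stand in for your column&#39;s rows):</p>

```shell
# printf stands in for the mysql stage; each line is one "row"
printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/.*/[&];/'
# → ['red','green','blue'];
```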
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use this syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
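<p>As a concrete sketch with a made-up field name and date range, a filter query using this trick would look like:</p>

```
fq=-(-last_seen:[2012-01-01T00:00:00Z TO 2012-06-01T00:00:00Z] AND last_seen:[* TO *])
```

<p>Documents whose last_seen is in that range, or which have no last_seen at all, come back; everything else is excluded.</p>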
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. It means the validator has to be constructed on each iteration, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot; by break, in the absolute best case, I mean merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, say /mnt/ubuntu here (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>Last thing: we want to have functional network name resolution, so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each; this is typically used on grouped products when determining if all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where the first column is the key and the second column the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just two rows (one for each unique status code). In order for this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it gets used as the key.</p>
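<p>You can see the collapsing behaviour with a toy simulation of that first-column-as-key folding (the numbers are made up, not real Magento rows):</p>

```shell
# Each awk script folds rows into an associative array keyed on column 1,
# mimicking fetchPairs, and counts the distinct keys that survive.
printf '1 101\n1 102\n1 103\n2 104\n' \
    | awk '{ if (!($1 in p)) n++; p[$1] = $2 } END { print n, "pairs" }'  # status first
printf '101 1\n102 1\n103 1\n104 2\n' \
    | awk '{ if (!($1 in p)) n++; p[$1] = $2 } END { print n, "pairs" }'  # entity_id first
```

<p>With the status column first you end up with only two pairs for four products; with the entity id first you keep one pair per product.</p>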
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is an unusual perspective for a lot of developers, even in 2012. Typically I&#39;ve used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles, not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and the spouse who won&#39;t see you for at least the next week), that nothing actually hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough-and-ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read, and its chapters are of a length that can be read on the train or bus, or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
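<p>If you prefer to keep filetype mappings out of your vimrc, the same autocmd can live in a dedicated ftdetect file instead (a minor convenience; the path below is the standard per-user Vim layout):</p>

```shell
# Alternatively, place the mapping in an ftdetect file so Vim picks it
# up automatically without editing vimrc
mkdir -p ~/.vim/ftdetect
echo 'au BufRead,BufNewFile *.twig set filetype=htmljinja' > ~/.vim/ftdetect/twig.vim
```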
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile gets sourced only for login shells - for example, when you log in with your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>Some confusion arises with other kinds of login shell, such as when you use the su - command or an explicit login-shell option provided by some terminal emulators. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only read if .bash_profile sources it explicitly.</p>
<p>I tend to put environment setup in .bash_profile - things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
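<p>The latter arrangement can be sketched as follows (a minimal example; the PATH entry is purely illustrative):</p>

```shell
# ~/.bash_profile - one-time environment setup, then delegate to ~/.bashrc
export PATH="$HOME/bin:$PATH"   # illustrative one-time setting

# Source .bashrc if it exists, so login and non-login shells behave alike
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```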
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, patents reduce the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
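<p>If the sed and set-selections dance feels opaque, an equivalent approach (a sketch; it assumes sudo rights and simply does nothing when no package is in the deinstall state) is to list the removed-but-not-purged packages and purge them directly:</p>

```shell
# List packages whose state is "deinstall" (removed, config files kept)
# and purge each one; xargs -r skips the dpkg call if the list is empty
dpkg --get-selections | awk '$2 == "deinstall" { print $1 }' | xargs -r sudo dpkg -P
```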
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm can only be used when the VM is powered off; for running VMs, use controlvm instead.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libXss, not being there. But it <em>is</em> there - although, specifically, it&#39;s a 64-bit library. I bet Skype isn&#39;t 64-bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that: we need compatible x86 (32-bit) shared libraries. But when you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing both a compatible libXss and several of the Qt libraries.</p>
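<p>As a small convenience when an ldd listing is long, the unresolved entries can be filtered out directly (nothing Skype-specific here, just a grep over ldd&#39;s output):</p>

```shell
# Show only the shared libraries the dynamic loader cannot resolve
ldd /usr/bin/skype | grep 'not found'
```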
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32-bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entity ID, and then returns whichever entry matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice by restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible, mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task (and this was the mid-90s) I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3 and Unity, the Linux Desktop has come a <em>long</em> way.)</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
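The sed pipeline that builds each output name can also be written with bash parameter expansion, which avoids spawning a subshell and a sed process per file. A minimal sketch, using a hypothetical photo.jpg:

```shell
# ${IMAGE%.jpg} strips the shortest trailing match of ".jpg"
IMAGE="photo.jpg"
NEWNAME="${IMAGE%.jpg}-resized.jpg"
echo "$NEWNAME"   # prints: photo-resized.jpg
```

Inside the loop you would write `convert -resize '1280x720' "$IMAGE" "${IMAGE%.jpg}-resized.jpg"`.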
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to the requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action:</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
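The whole lifecycle is easy to try against a throwaway local remote. The sketch below assumes nothing beyond git itself; all paths and names are made up:

```shell
# Throwaway demo: create a bare "remote", publish a branch into it,
# then delete it again with the colon syntax.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD:develop   # push HEAD into a new remote branch
git push -q origin :develop       # push 'nothing' into it: branch deleted
git branch -r                     # origin/develop is no longer listed
```

Recent versions of git also accept `git push origin --delete develop`, which reads more naturally but does exactly the same thing.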
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how sponge waits for the end-of-file (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or, in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
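The failure sponge guards against is easy to reproduce with any filter, because the shell opens (and truncates) the output file before the command has read a single byte. A minimal sketch, with tr standing in for iconv:

```shell
# Naive in-place conversion WITHOUT sponge: '>' truncates demo.txt
# before tr gets to read it, so the data is simply destroyed.
printf 'abc\n' > demo.txt
tr 'a-z' 'A-Z' < demo.txt > demo.txt
wc -c < demo.txt   # 0 -- the file is now empty
```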
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
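The cp1252/iso-8859-1 difference is easy to demonstrate: byte 0x92, for instance, is a curly apostrophe in cp1252 but an unprintable control character in iso-8859-1. A quick round trip (the sample filename is arbitrary):

```shell
# Byte 0x92 (octal 222) decodes to U+2019, a right single quotation
# mark, under cp1252 -- under iso-8859-1 it is a control character.
printf 'it\222s\n' > cp1252-sample.txt
iconv -f cp1252 -t utf-8 cp1252-sample.txt
```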
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
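For the curious, the expansion is easy to inspect with purely local commands (and note that a plain pipe, wget -q -O - URL | tar zxv, works here too; process substitution really earns its keep when a command wants a filename rather than stdin). A minimal sketch, assuming bash:

```shell
# Bash replaces <(cmd) with a /dev/fd/N path wired to cmd's stdout
echo <(true)        # prints something like /dev/fd/63
cat <(echo hello)   # prints: hello
```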
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (which were blocked because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can take the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
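Command substitution on its own is easy to picture with plain commands; a minimal sketch:

```shell
# The shell runs the inner command first and splices its stdout into
# the outer command line; `...` and $(...) are equivalent forms.
echo "2 + 3 = $(expr 2 + 3)"   # prints: 2 + 3 = 5
```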
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
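To confirm the change took, you can list a user's groups with id (note the new membership only shows up in sessions started after the modification):

```shell
# -n prints names rather than numeric ids; -G lists every group the
# invoking user belongs to. Pass a username to query another account.
id -nG
```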
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies, which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
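The same flow can be reproduced in a throwaway repository; everything below is local and the file and branch names are made up:

```shell
# Start editing on the default branch by mistake, stash the work,
# then replay it on develop where it belonged.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
echo 'accidental edit' > notes.txt   # oops: edits on the wrong branch
git add notes.txt
git stash save -q                    # working tree is clean again
git checkout -q -b develop
git stash apply -q                   # the edit reappears, now on develop
cat notes.txt                        # prints: accidental edit
```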
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
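A throwaway sketch of the tracking setup, using only local paths (all names made up):

```shell
# Build an upstream repo with a develop branch, clone it, then check
# out a local develop tracking origin/develop.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
cd "$tmp/upstream"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git branch develop
git clone -q "$tmp/upstream" "$tmp/clone"
cd "$tmp/clone"
git checkout -q -b develop origin/develop
git rev-parse --abbrev-ref HEAD   # prints: develop
```

Afterwards, `git branch -vv` will show develop tracking origin/develop.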
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application, but you&#39;re free to use whatever codebase you like. For the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that is, at first glance, daunting. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git and then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: reload the server configuration</li>
<li>restart: restart the server</li>
<li>exit: close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
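<p>These endpoints are simple enough to wrap in a small shell helper. A minimal sketch (the jenkins_url and jenkins_cmd names are mine, and it assumes a server with no authentication):</p>

```shell
# Build the command URL separately so it can be inspected or logged.
jenkins_url() {
    printf 'http://%s/%s' "$1" "$2"
}

# Fire the command at the server (hypothetical helper; assumes no auth).
jenkins_cmd() {
    curl -s "$(jenkins_url "$1" "$2")"
}

# Usage: jenkins_cmd localhost:8080 reload
```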
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not much help in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved with either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, E/JGIt plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent git pulls will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>To avoid having to do this at all, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
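<p>The whole flow can be tried end to end against a local bare repository standing in for GitHub. A sketch (note that modern Git has replaced <em>git branch --set-upstream</em> with <em>--set-upstream-to</em>, which is what the example uses):</p>

```shell
set -e
tmp=$(mktemp -d)

# A local bare repository stands in for the remote on GitHub.
git init -q --bare "$tmp/foo.git"

# An existing codebase with at least one commit.
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git branch -M master                 # make sure the branch is named master

# Add the remote and push the master branch into it.
git remote add origin "$tmp/foo.git"
git push -q origin master
git fetch -q origin                  # ensure origin/master exists locally

# Option 4: tie the local branch to the remote one.
git branch --set-upstream-to=origin/master master

git pull -q                          # no longer complains about the branch
```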
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
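<p>For reference, the shape of the function is roughly this. This is a sketch rather than the exact gist: the glob_to_like and mysqldump_bypattern names are mine, and the mysql/mysqldump calls assume password-prompting credentials:</p>

```shell
# Translate a shell-style glob into a SQL LIKE pattern:
# '*' becomes '%', and a literal '_' is escaped so it is not a wildcard.
glob_to_like() {
    printf '%s' "$1" | sed -e 's/_/\\_/g' -e 's/\*/%/g'
}

# Dump every table in database $2 whose name matches the glob in $3.
mysqldump_bypattern() {
    user=$1 db=$2 pattern=$(glob_to_like "$3")
    tables=$(mysql -u"$user" -p "$db" -N -B -e "SHOW TABLES LIKE '$pattern'")
    for t in $tables; do
        mysqldump -u"$user" -p "$db" "$t"
    done
}

# Usage: mysqldump_bypattern myuser mydb 'mytables_*' > mytables.sql
```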
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using instead the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one: it has few methods (the main ones return its string value and its integer value), it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby String. In Ruby (and most OO languages) two strings, even if they consist of the same sequence of characters, are different objects. Two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. In a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
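<p>Worth noting: <code>"$@"</code> inside an alias refers to the interactive shell&#39;s (empty) positional parameters, and the alias only works because the filename lands after the redirection. A shell function makes the intent explicit:</p>

```shell
# Function form of the alias above; here "$@" really is the arguments.
ql() { qlmanage -p "$@" > /dev/null 2>&1; }
# e.g. ql some-image.png
```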
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it&#8217;s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the consistency of a MyISAM table can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
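<p>Putting the options discussed above together (the user and database names are placeholders), a backup command might look like the sketch below, compressing after the dump completes so locks are held no longer than necessary:</p>

```shell
# Placeholders throughout; combines the mysqldump options discussed above.
backup_db() {
  local db="$1"
  mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit -uuser -p "$db" > "$db.sql" \
    && gzip -f "$db.sql"
}
# e.g. backup_db mydatabase   # produces mydatabase.sql.gz
```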
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to:     $ svn del</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified and deleted tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that has only the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
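<p>A throwaway demonstration of this flow, using a local bare repository in place of the ssh URL (all paths and names here are made up):</p>

```shell
# Create a bare "central" repo, then push an initial commit to it.
cd "$(mktemp -d)"
git init -q --bare central.git
git init -q work && cd work
echo 'hello' > readme.txt
git add readme.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial commit'
git remote add origin ../central.git
git push -q origin HEAD:master   # HEAD:master works whatever the local branch name
git ls-remote --heads origin     # now lists refs/heads/master
```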
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
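<p>A quick demonstration in a scratch repository (the filenames are invented):</p>

```shell
# After --assume-unchanged, local edits no longer show up in git status.
cd "$(mktemp -d)" && git init -q demo && cd demo
echo 'password=dev' > local.ini
git add local.ini
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add defaults'
git update-index --assume-unchanged local.ini
echo 'password=mine' > local.ini   # edit the file...
git status --porcelain             # ...prints nothing; the change is ignored
```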
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&amp;#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C, BASH the first element of ARGV is the program's name. In Ruby, and in PERL, it is the first argument passed into the program. </p>&#13;
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin/feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets confused because, when you install the MySQL Rubygem as directed by Rake, it links against the MySQL bundled with OSX rather than the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling the existing gem first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute your own MacPorts mysql_config path for the mysql_config5 path above. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation, as there is no in-situ update feature: you must first delete the old document and then add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index just to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
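<p>Putting the pieces together, the delete-then-re-add update cycle described above looks roughly like this (a sketch only; the field names and values are illustrative):</p>

```php
// Locate the existing document by its Keyword'd identifier
$term = new Zend_Search_Lucene_Index_Term('http://a.com/uri', 'uri');
foreach ($index->termDocs($term) as $id) {
    $index->delete($id);
}

// Re-add the replacement document under the same identifier
$doc = new Zend_Search_Lucene_Document();
$doc->addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));
$doc->addField(Zend_Search_Lucene_Field::Text('title', $newTitle));
$index->addDocument($doc);
$index->commit();
```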
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param  string $controller_class&#13;
 * @return ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema, and we want to replace all instances of this in the dumpfile with 'utf8'. Note that the blanket substitution below will also touch any post content that happens to contain the string 'latin1', so skim the result before importing.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
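<p>If you want a sanity check before touching the real dump, try the same substitution on a throwaway file first (the table definition below is just an illustration):</p>

```shell
# Make a one-line sample "dump" and run the same substitution on it
printf 'CREATE TABLE post (id INT) DEFAULT CHARSET=latin1;\n' > sample.sql
sed -i 's/latin1/utf8/g' sample.sql   # on BSD/OSX sed, use: sed -i '' 's/latin1/utf8/g' sample.sql
cat sample.sql
```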
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you also have a translated language pack to install, convert it to UTF-8 the same way before uploading it.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. A CV's purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leaves a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised (sorry, capitalized; Ruby doesn't speak English, only American). A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
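<p>You can watch it happen; reassigning a constant earns you a warning on stderr and nothing more:</p>

```ruby
ANSWER = 42
ANSWER = 43   # warning: already initialized constant ANSWER -- but it sticks
puts ANSWER   # prints 43
```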
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
Mac OS X). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys that haven't been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML, with a core C backend doing the grunt work. This seems like a ridiculous idea now, as CPU cycles are much cheaper than man-hours, but in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with the C standard library. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your use of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable sigils, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is an appreciation of memory management. C is rare among today's programming environments in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: char str[20];</p>&#13;
<p>This reserves 20 bytes, room for 19 characters plus the terminating NUL, and str points to that address in memory. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20-char string defined but want to store a 25-char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. A way of getting around the need to physically create a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL (including the encoded post data), the other containing the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive file system, e.g. case-sensitive HFS+ on a Mac or a typical Unix file system, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchyas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around 1/5 of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column holds a date earlier than this value, the record is more than 30 days old and will match the query.</p>&#13;
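<p>The same 30-day cutoff can be reproduced in PHP with strtotime(), which is a handy sanity check when debugging report queries (the table and column names below are just placeholders):</p>&#13;

```php
<?php
// Mirror DATE_SUB(CURDATE(), INTERVAL 30 DAY) in PHP. Pinning the
// "current" date to 2010-05-20 reproduces the example above.
$today  = strtotime('2010-05-20');
$cutoff = date('Y-m-d', strtotime('-30 days', $today));
// $cutoff is '2010-04-20'

// Placeholder query matching the article's SQL.
$sql = "SELECT * FROM my_table WHERE date_column < '$cutoff'";
echo $cutoff;
```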
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (in 'app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column layout, you need to edit page.xml and, within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first, though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages under the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages under the url customer/account/edit/). Including your remove declaration in the right element here allows fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour in memcached 1.4, which removed the second parameter passed to the delete command. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. Thankfully, 1.4.4 added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running <code>locale -a</code>), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, as that byte value is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso-8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
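<p>Putting the two steps together, here is a minimal sketch of the normalise-then-encode flow, assuming the input really is iso-8859-1 (e.g. text pasted from Word):</p>

```php
<?php
// "Café" with an iso-8859-1 encoded é (the single byte 0xE9).
$isoText = "Caf\xE9";

// Step 1: normalise the input to utf-8.
$utf8Text = iconv('ISO-8859-1', 'UTF-8', $isoText);

// Step 2: encode for output, telling PHP the text is utf-8.
// Without the third parameter the extended character would break.
$safe = htmlentities($utf8Text, ENT_QUOTES, 'UTF-8');

echo $safe; // Caf&eacute;
```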
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to use it like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code>. BSD sed, as shipped with Mac OSX, expects the backup suffix as its own argument, even when empty.</p>&#13;
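<p>For reference, a short comparison of the BSD and GNU forms; the .bak variant should work with both and keeps a backup of the original file:</p>&#13;

```shell
# BSD sed (Mac OSX) needs the suffix as a separate argument:
#   sed -i '' 's/hello/goodbye/g' helloworld.txt
# GNU sed attaches it to -i (or omits it entirely):
#   sed -i 's/hello/goodbye/g' helloworld.txt
# Giving a real suffix works with both, leaving helloworld.txt.bak behind:
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt
```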
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expression generates the list of array indices: ${#FILES[@]} is the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, each on its own line.</p>&#13;
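<p>As an aside, if you only need the values and not the indices, bash can iterate the array directly. A sketch of the same loop without seq:</p>&#13;

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

# Quoting "${FILES[@]}" keeps each element intact even if it contains spaces.
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```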
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 "$file" "resized_$file"; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
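<p>To see what the alias actually does, here is a quick, purely illustrative expansion using a made-up branch name (in the real alias the branch name comes from git symbolic-ref --short HEAD):</p>

```shell
# Illustration only: simulate what `git sup` expands to for a hypothetical
# branch named feature/foo. In the real alias the branch name is supplied
# by `git symbolic-ref --short HEAD` at run time.
branch="feature/foo"
remote="origin"   # change this if you push somewhere other than origin
echo "git branch --set-upstream-to=${remote}/${branch}"
```

<p>Running this prints the exact command the alias would execute for that branch.</p>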
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
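<p>A minimal sketch of the gotcha, plus one common workaround (a here-document instead of a pipe), assuming bash or a similar POSIX shell without the lastpipe option enabled:</p>

```shell
# The while loop sits at the end of a pipeline, so it runs in a subshell
# and its updates to count are lost when the pipeline finishes.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo "after pipeline: $count"   # typically prints 0

# Workaround: feed the loop from a here-document; no pipe, so no subshell.
count=0
while read -r line; do count=$((count+1)); done <<EOF
a
b
c
EOF
echo "after here-doc: $count"   # prints 3
```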
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
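<p>If you want to see the merge.ff setting do its job without touching a real project, here is a scratch-repository sketch (it assumes git is installed; all file, branch and identity names are throwaway):</p>

```shell
# Build a tiny repo whose branches diverge, then watch merge.ff=only
# refuse the merge. Everything happens in a temporary directory.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email demo@example.com
git config user.name demo
git config merge.ff only

echo one > file
git add file
git commit -qm 'first commit'
main=$(git symbolic-ref --short HEAD)   # default branch name varies by git version

git checkout -qb feature
echo two > file
git commit -qam 'feature work'

git checkout -q "$main"
echo three > other
git add other
git commit -qm 'diverging commit'

# The histories have diverged, so no fast-forward is possible.
if git merge -q feature 2>/dev/null; then
  echo "merged"
else
  echo "refused: not a fast-forward"
fi
```

<p>The final line prints &quot;refused: not a fast-forward&quot;, which is your gentle reminder to rebase first.</p>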
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion, and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard and I like the idea of those that attempt it, and do a good job, being appropriately rewarded for doing so.</p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing a description message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and a brief comparison between the two principal TDD xUnit styles: the Statist TDD and Mockist/London School TDD styles. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same.</p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
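<p>The same trick works verbatim in SQLite, so if you have the sqlite3 command-line client handy you can watch it work on some made-up sample data (the table and column names here are hypothetical):</p>

```shell
# NULLIF(price, 0) turns zero prices into NULL, and MIN() skips NULLs,
# so each group reports its smallest non-zero price.
sqlite3 :memory: "
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 3.0), (2, 0.0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;
"
```

<p>This should print 1|4.5 and 2|3.0, skipping the zero prices in both groups.</p>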
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55.</p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select which lets us select a version to activate and give us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
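<p>For reference, the two php.ini settings called out in the comments above look like this (the timezone value is only an example, pick your own):</p>

```ini
; php.ini
date.timezone = Europe/London
; stop PHP guessing at script names from PATH_INFO; safer behind FastCGI
cgi.fix_pathinfo = 0
```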
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled, to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert, or append to an existing line:
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed a run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file path for the mysql version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
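<p>If you also use the mysqli or legacy mysql extensions, they read their own socket directives, so the same value can be set for them too (directive names as per the PHP manual; the path assumes the default MacPorts mysql55 layout):</p>
<pre><code>mysqli.default_socket=/opt/local/var/run/mysql55/mysqld.sock
mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock
</code></pre>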
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
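<p>If you mount the share often, the key can also live in ~/.ssh/config so a plain sshfs invocation needs no extra options (a sketch; the host alias is illustrative):</p>
<pre><code>Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
</code></pre>
<p>Then <code>sshfs awshost:/var/www/ ~/Sites/awshost</code> just works.</p>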
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well: it gives you the remainder of a string haystack from the first occurrence of some needle onwards. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array and Float. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
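<p>As an aside, the &#39;last path segment&#39; job that started all this is a one-liner in shell, via parameter expansion or basename (a tangent, not a PHP fix):</p>
<pre><code>url=&#39;http://www.google.com/a/b/c/d.img&#39;
echo &quot;${url##*/}&quot;   # strip everything up to the last &#39;/&#39;: prints d.img
basename &quot;$url&quot;     # prints d.img
</code></pre>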
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers who shared his vision of interactive computing, and those, whether in his lab, the other labs or management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you pit a visionary maverick against academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
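<p>Under the hood, -s stop and -s quit just send signals to the master process, so if all you have is the pid file you can do the same with kill (signal meanings as documented by Nginx; the pid file path is the Ubuntu default and may differ on your system):</p>
<pre><code># fast shutdown (what -s stop does)
$ sudo kill -TERM $(cat /var/run/nginx.pid)
# graceful shutdown (what -s quit does)
$ sudo kill -QUIT $(cat /var/run/nginx.pid)
# reload configuration without dropping connections
$ sudo kill -HUP $(cat /var/run/nginx.pid)
</code></pre>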
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did in November though, resolve to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marveling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk&#39;s development formalising most of the vocabulary of OO software, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input; the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering one of the listed index numbers switches you directly to that directory. It is a killer app.</p>
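<p>Bash users aren&#39;t entirely left out: the same directory stack is there via the pushd, popd and dirs builtins, just with more keystrokes (zsh&#39;s AUTO_PUSHD option is roughly &#39;pushd on every cd&#39;):</p>
<pre><code>cd /tmp
pushd /usr &gt; /dev/null    # stack is now: /usr /tmp
dirs -v                   # list the stack with indices
pushd +1 &gt; /dev/null      # rotate entry 1 to the top and cd there
pwd                       # back in /tmp
</code></pre>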
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
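<p>The reason this works: in a :w !cmd command vim pipes the buffer to cmd&#39;s stdin, and % expands to the current file name, so the buffer flows through a root-owned tee back into the file. Here is the tee half of the trick on its own, minus sudo so it runs anywhere:</p>
<pre><code>echo &#39;keepalive_timeout 65;&#39; | tee /tmp/demo.conf &gt; /dev/null
cat /tmp/demo.conf   # prints keepalive_timeout 65;
</code></pre>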
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available on GitHub, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have MacPorts in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
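<p>If the gem comes in via Bundler rather than gem install, the same flags can be persisted with Bundler&#39;s per-gem build configuration so every bundle install picks them up (a sketch, assuming the same MacPorts paths):</p>
<pre><code>$ bundle config build.mysql2 --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql
</code></pre>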
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>This works fine. However, if we want to debug during a PHPUnit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings early enough for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote server, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its own localhost:9000 back over the SSH connection to port 9000 on the machine you connected from. So when xdebug on the VM connects to localhost:9000, it actually reaches mydevmachine.local:9000, where the IDE is listening.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin, which isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="https://github.com">Github</a> is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available, and in the PHP camp <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires the stable version of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host the code themselves.</p>
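<p>This model is easy to see in a project&#39;s composer.json. A minimal sketch (the vendor and package names below are invented for illustration):</p>

```json
{
    "require": {
        "acme/package-x": "1.0.*",
        "acme/package-y": "2.1.*",
        "acme/package-z": "3.0.*"
    }
}
```

<p>Each project that runs an install against a file like this gets its own vendor/ directory built from its own constraints, so two applications can depend on different, even conflicting, versions of the same library without fighting over one central install, which is exactly what PEAR could not do.</p>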
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint: a simple way to get a formatted, easy-to-examine view of how Magento is putting your install&#39;s configuration together.</p>
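<p>Process substitution itself is easy to see in isolation. A minimal, self-contained sketch (it is bash-specific, hence the explicit bash -c; cat stands in for xmllint here):</p>

```shell
# <(cmd) expands to a file-like path (often /dev/fd/N) that the
# consuming program can open and read as though it were a regular file.
result=$(bash -c 'cat <(printf "piped input\n")')
echo "$result"
```

<p>Any tool that insists on being given a filename, xmllint included, can read the substituted path.</p>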
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
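<p>A throwaway repository makes the output easy to see for yourself. A quick sketch (assumes git is on your PATH; the file name and commit message are invented):</p>

```shell
# Build a disposable repo containing a single one-file commit.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf 'hello\n' > notes.txt
git add notes.txt
git -c user.name=dev -c user.email=dev@example.com commit -qm 'add notes'

# --format= suppresses the commit header so only the file list remains;
# tr strips the blank separator line an empty format can leave behind.
files=$(git show --name-only --format= HEAD | tr -d '\n')
echo "$files"
```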
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to booting a Unix system into single-user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there than on pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do is force Google to nuke w3schools&#39; PageRank. This means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it really need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>Admittedly this isn&#39;t great documentation, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, one thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh it, I need to prune my branch list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
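<p>You can watch the whole lifecycle with a pair of throwaway clones. A sketch (assumes git is installed; the repo and branch names are invented):</p>

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare origin.git                      # stand-in for the shared remote
git --git-dir=origin.git symbolic-ref HEAD refs/heads/master
git clone -q origin.git work1 2>/dev/null          # first clone (empty-repo warning silenced)
cd work1
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m 'init'
git push -q origin HEAD:master                     # publish master
git push -q origin HEAD:feature                    # plus a soon-to-be-deleted branch
cd ..
git clone -q origin.git work2                      # second clone sees origin/feature
cd work1
git push -q origin :feature                        # delete the branch on the remote
cd ../work2
git fetch -q                                       # a plain fetch leaves the stale ref behind
git remote prune origin > /dev/null                # ...and prune clears it out
git branch -r                                      # only origin/HEAD and origin/master remain
```

<p>Newer versions of git can combine the fetch and prune steps with git fetch --prune (or -p for short).</p>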
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy. Replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of how bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one spot but (programmers being human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages: they promise so much, and when, on that special day, the moon is aligned with Mars, it all just works and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, or more specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine; it does what you tell it to do. But for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 
655958024588 (-newer) and not newer than  627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped ; terminates the command sequence (much like it does in regular bash).</p>
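<p>Putting the pieces together with made-up dates and file names (run here in a scratch directory):</p>

```shell
# Worked example; the dates and file names are hypothetical.
cd "$(mktemp -d)"
touch -t 201207010000 start_date_file           # range start: 1 Jul 2012
touch -t 201208010000 end_date_file             # range end:   1 Aug 2012
touch -t 201207150000 in_range.log              # inside the range
touch -t 201209150000 out_of_range.log          # outside the range

# delete everything inside the range, leaving the marker files alone
find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name '*_date_file' -exec rm -f {} \;

ls    # in_range.log is gone; out_of_range.log and the markers remain
```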
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can drive a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I&#39;d done this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you make the change without complaint.</p>
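<p>The trick in command form. Since editing the real /etc/shells needs root, the append-if-missing step is demonstrated here against a scratch copy (the MacPorts bash path is the one from the example above):</p>

```shell
# On a real system you would edit /etc/shells itself (via sudo), then run chsh.
cp /etc/shells /tmp/shells.demo 2>/dev/null || printf '/bin/sh\n/bin/bash\n' > /tmp/shells.demo
new_shell='/opt/local/bin/bash'
grep -qx "$new_shell" /tmp/shells.demo || echo "$new_shell" >> /tmp/shells.demo
tail -n 1 /tmp/shells.demo
# the new shell is now listed, so chsh -s /opt/local/bin/bash will be accepted
```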
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Because of shell escaping rules you can&#39;t easily embed a single quote in the awk program itself, so you pass it in via the q variable. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
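<p>To see the quoting and joining stages in isolation, you can stub out the mysql step with printf (the colour values are made up):</p>

```shell
# mysql replaced by printf so the rest of the pipeline can be run anywhere
printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/\(.*\)/[\1];/'
# → ['red','green','blue'];
```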
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR that lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
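<p>For instance, with a hypothetical last_seen date field, documents last seen during 2012 or never seen at all would be matched by:</p>

```
-(-last_seen:[2012-01-01T00:00:00Z TO 2012-12-31T23:59:59Z] AND last_seen:[* TO *])
```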
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic (it doesn&#39;t look at multi-product combinations) but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It does mean the validator has to be constructed afresh on each iteration, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can (in theory) run on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a c compiler, a shell, the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and when Gentoo systems broke, the absolute best case was that the machine merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. First step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/oneiric (it can be whatever - these examples use /mnt/ubuntu). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: for functional network name resolution, copy the host&#39;s /etc/resolv.conf over to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining if all of their child stock items are out of stock. </p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was considered out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key, and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set collapsed to just two entries (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
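<p>The collapse is easy to reproduce outside PHP. Awk&#39;s associative arrays key on whatever you index them with, just as fetchPairs keys on the first column, so the same effect can be sketched in shell:</p>

```shell
# Three rows, two distinct statuses: keyed on column one (status first),
# entries collide and only two survive - the bug in miniature.
printf 'status_a p1\nstatus_a p2\nstatus_b p3\n' \
    | awk '{ if (!($1 in seen)) n++; seen[$1] = $2 } END { print n }'
# → 2

# Entity id first: one entry per product, as getProductStatus() expects.
printf 'p1 status_a\np2 status_a\np3 status_b\n' \
    | awk '{ if (!($1 in seen)) n++; seen[$1] = $2 } END { print n }'
# → 3
```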
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into the common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, tying them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The idea that software is about communication is emphasised throughout the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read, and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting is clear. I really like the images and diagrams, which are simple, authentic and support what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practise what they preach. GOOS is a book for people who write real code in the real world. Sadly, there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to Twig files, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
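<p>If you&#39;d rather not clutter your vimrc, the same autocmd can live in a filetype-detection script instead. A sketch, assuming the standard ~/.vim layout:</p>

```shell
# Alternative: install the mapping as a filetype-detection script,
# which Vim loads automatically, instead of editing vimrc.
mkdir -p ~/.vim/ftdetect
cat > ~/.vim/ftdetect/twig.vim <<'EOF'
au BufRead,BufNewFile *.twig set filetype=htmljinja
EOF
```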
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile gets sourced only for login shells. Specifically, that means when you enter your username and password at the console. The .bashrc file is sourced for interactive non-login shells, that is, whenever you open up a new terminal.</p>
<p>There is some confusion when a login shell is started in other ways, such as via the <em>su -</em> command or the explicit &#39;login shell&#39; option some terminal emulators provide. In these cases the same rule applies: a login shell means bash sources .bash_profile, and .bashrc runs too only if your .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile: paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
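<p>For the curious, the &#39;source .bashrc from .bash_profile&#39; arrangement is just a couple of lines. A minimal sketch:</p>

```shell
# A minimal ~/.bash_profile that simply defers to ~/.bashrc, so every
# interactive setting lives in one file. The guard avoids an error
# when ~/.bashrc does not exist.
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
```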
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block programmatically from your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>, presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em>, the configuration files of the uninstalled packages are not deleted.</p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
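<p>If the oneliner feels too magical, the filtering step can be pulled out and inspected on its own. A sketch of the same idea, demonstrated here against a canned sample of dpkg --get-selections output rather than a real package database:</p>

```shell
# The core of the oneliner is the 'deinstall' filter: dpkg marks packages
# that were removed but still have config files as 'deinstall'. Here the
# filter runs on tab-separated sample data so the logic is easy to verify.
selections=$(printf 'vim\tinstall\nold-pkg\tdeinstall\nnano\tinstall\n')
to_purge=$(printf '%s\n' "$selections" | awk '$2 == "deinstall" { print $1 }')
echo "$to_purge"   # old-pkg

# Real usage would then hand the list to apt-get, e.g.:
#   dpkg --get-selections | awk '$2 == "deinstall" { print $1 }' \
#     | xargs -r sudo apt-get -y purge
```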
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>The modifyvm subcommand must only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I&#39;m missing both a compatible libXss and the assorted Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can be interesting (perhaps it&#39;s just me :)) to see the subtle differences in how the same program is assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of <em>every</em> address object (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), then iterates over the collection, assigning each address to an array keyed by its entityId, and returns whichever entry matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t already doing this. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, Identify lets you get information about a file: its size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU Find. For example, to convert a batch of images from 1920x1080 down to 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
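<p>If you prefer to avoid the extra echo and sed processes, bash parameter expansion can do the same filename rewriting on its own. A minimal sketch of just the renaming step (the filename here is made up for illustration):</p>

```shell
# Rewrite foo.jpg -> foo-resized.jpg without forking sed or echo:
# ${IMAGE%.jpg} strips a trailing '.jpg' from the value of IMAGE
IMAGE='holiday-photo.jpg'
RESIZED="${IMAGE%.jpg}-resized.jpg"
echo "$RESIZED"   # prints: holiday-photo-resized.jpg
```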
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, you need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action:</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of the local branch mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
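<p>Newer versions of Git also accept an explicit --delete flag, which does the same thing without the cryptic refspec. Here is the whole cycle played out against a throwaway local repository (the paths are invented purely for the demo):</p>

```shell
# Build a disposable bare "remote" plus a clone, so nothing real is touched
tmp=$(mktemp -d)
git init --bare -q "$tmp/atestrepo.git"
git clone -q "$tmp/atestrepo.git" "$tmp/clone" 2>/dev/null
cd "$tmp/clone"
git config user.email aaron@example.com
git config user.name 'Aaron Bonner'
git commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD

git checkout -q -b develop
git push -q origin develop            # create origin/develop
git push -q origin --delete develop   # same effect as 'git push origin :develop'
git branch -r                         # origin/develop is gone
```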
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do the same. What is different is how Sponge waits for end-of-file (EOF) on its input before opening and writing to the output file. That is, it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent: it is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can use bash, iconv and sponge to save us the tedium of converting each file to a new copy and then replacing the original with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
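<p>You can watch the conversion happen on a single byte: 0xE9 is &#39;é&#39; in cp1252, and UTF-8 encodes that same character as the two bytes 0xC3 0xA9.</p>

```shell
# 0xE9 (é in cp1252) becomes the two-byte UTF-8 sequence C3 A9;
# od renders the converted output as hex so the change is visible
printf '\xe9' | iconv -f cp1252 -t utf-8 | od -An -tx1
```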
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarball in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and then we direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
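<p>If you want to play with the mechanics without hitting the network, anything that writes a tarball to stdout can stand in for wget. A self-contained sketch (throwaway paths, with cat substituting for the download; -f - just makes explicit that tar reads the archive from stdin):</p>

```shell
# Create a small tar.gz locally, then extract it through process
# substitution exactly as the wget one-liner does
tmp=$(mktemp -d)
cd "$tmp"
mkdir atarfile
echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile                     # so the extraction below recreates it

tar zxf - < <(cat atarfile.tar.gz) # 'cat' plays the role of 'wget -q -O -'
cat atarfile/file.txt              # prints: hello
```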
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out because they still had active dependants. It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the Drush disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
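<p>The reason this works is that unquoted command substitution splits the captured output on whitespace, so each module name arrives at pm-disable as a separate argument. The mechanics are plain shell; here is a sketch using a stand-in function in place of the real drush call:</p>

```shell
# Stand-in for 'drush pm-list --no-core --type=module --pipe':
# emits one module name per line, like the real command
list_modules() { printf 'ad\nad_channel\nclick_filter\n'; }

# Word-splitting turns each line into its own argument,
# just as pm-disable expects
set -- $(list_modules)
count=$#
modules="$*"
echo "$count modules to disable: $modules"   # prints: 3 modules to disable: ad ad_channel click_filter
```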
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool, and comes with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
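<p>On recent versions of Git (1.7.0 onwards) the push and --set-upstream steps can be combined into one with the -u flag. A sketch against throwaway repositories (paths invented for the demo):</p>

```shell
# Disposable bare "remote" and working clone, so the demo is side-effect free
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email aaron@example.com
git config user.name 'Aaron Bonner'
git commit -q --allow-empty -m 'Initial commit'
git push -q origin HEAD

git checkout -q -b my-new-feature
git commit -q --allow-empty -m 'Initial feature commit'
git push -q -u origin my-new-feature   # push and set upstream in one step
git rev-parse --abbrev-ref @{u}        # prints: origin/my-new-feature
```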
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
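<p>To confirm the change, id lists every group a user belongs to (shown here for the current user, since &#39;aaron&#39; is a made-up account; note that a fresh login is needed before new group membership shows up in existing sessions):</p>

```shell
# -nG prints group names rather than numeric ids; with no user
# argument it reports on the current user
id -nG
```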
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came up today when I started making amendments to a repo&#39;s master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to commit the changes to master and I didn&#39;t want to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
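<p>Bare git stash is shorthand for git stash save, and git stash pop applies and drops the stash entry in one step. The whole rescue, played out in a throwaway repository (paths invented for the demo):</p>

```shell
# Disposable repo reproducing the mistake: an edit made on master
# that was meant for develop
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email aaron@example.com
git config user.name 'Aaron Bonner'
echo 'v1' > notes.txt
git add notes.txt
git commit -q -m 'initial commit'
git branch develop

echo 'v2' > notes.txt   # oops - edited while still on the wrong branch
git stash               # shorthand for 'git stash save'
git checkout -q develop
git stash pop           # apply the change and drop the stash entry
cat notes.txt           # prints: v2
```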
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
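<p>Equivalently, you can let git write that file for you with git config --global, run as the jenkins user. The sketch below points HOME at a temporary directory so it is side-effect free; on a real server you would run it as jenkins against its normal home directory:</p>

```shell
# 'git config --global' writes to $HOME/.gitconfig; using a temp HOME
# here keeps the demo from touching your real config file
export HOME="$(mktemp -d)"
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
cat "$HOME/.gitconfig"
```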
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases; we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to do a build before the project workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
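<p>The <em>build</em> command above lends itself to scripting, e.g. from cron jobs or deploy hooks. A minimal sketch of composing the invocation; the job name &#39;Bookings&#39; and server URL here are assumptions, and the trailing -s tells Jenkins to wait for the build to complete:</p>

```shell
#!/bin/sh
# Compose the jenkins-cli invocation for a given server and job.
# Echoing the command makes it easy to inspect before running for real.
cli_build_cmd() {
  printf 'java -jar jenkins-cli.jar -s %s build %s -s\n' "$1" "$2"
}

# Print the command; pipe it to sh (or drop the printf) to actually run it.
cli_build_cmd http://localhost:8080 Bookings
```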
<p>In recent versions of Jenkins, typical server operations have been decoupled from the CLI tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload the server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or Synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
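<p>If you prefer to script the port change rather than edit the file by hand, a sed one-liner does the job. The sketch below operates on a local copy so it is safe to try anywhere; in practice you would point it at /etc/default/jenkins with sudo, and the port 8081 is just an example:</p>

```shell
#!/bin/sh
# Make a stand-in for /etc/default/jenkins, then rewrite the HTTP_PORT line.
printf 'HTTP_PORT=8080\n' > jenkins.defaults
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' jenkins.defaults
cat jenkins.defaults  # now reads HTTP_PORT=8081
```

<p>Follow it with the service restart shown above for the change to take effect.</p>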
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have burned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
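<p>The query itself lives in the gist below; the general shape of an INFORMATION_SCHEMA size report looks something like this sketch. The schema name &#39;mydb&#39; is an assumption, and data_length + index_length is only an approximation of on-disk size:</p>

```shell
#!/bin/sh
# Emit a table-size report query for the given schema. Pipe the output
# into the mysql client, e.g.: tablesize_sql mydb | mysql -uuser -p
tablesize_sql() {
  cat <<EOF
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = '$1'
ORDER BY size_mb DESC;
EOF
}

tablesize_sql mydb
```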
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using your package manager of choice, for example aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, E/JGIt plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
</ol>
<p>4. Refer to the remote branch using --set-upstream:</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
<p>Personally, I find option 4 the best, as it involves the least amount of work.</p>
<p>To avoid having to do this for every new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of the tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
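<p>The gist below has the full version; the core idea (filter the table list with a shell glob, then hand the matches to mysqldump) can be sketched like this, where the database name, credentials and pattern are placeholders:</p>

```shell
#!/bin/sh
# Filter a list of table names by a shell glob pattern.
filter_tables() {
  pattern="$1"; shift
  for t in "$@"; do
    case "$t" in
      $pattern) printf '%s\n' "$t" ;;
    esac
  done
}

# In practice the table list would come from the server, e.g.:
#   tables=$(mysql -uuser -p -N -e 'SHOW TABLES' mydb)
#   mysqldump -uuser -p mydb $(filter_tables 'mytables_*' $tables)
filter_tables 'mytables_*' mytables_a mytables_b other_table
```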
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select the PDT Development Tools All In One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect as much of the learning Ruby literature says to basically ignore them for now, or launch into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails (4th ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using full string objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable; and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages), two strings are different objects even if they consist of the same sequence of characters. Two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language, the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax; to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. The two best resources I&#39;ve found explaining them are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
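<p>Worth noting: bash aliases don&#39;t interpolate "$@" (an interactive shell&#39;s positional parameters are normally empty, and the file name simply gets appended after the expansion), so the alias works somewhat by accident. A shell function, sketched here with the same name, handles arguments explicitly:</p>

```shell
# "$@" expands to the function's arguments, so several files can be
# previewed at once; output and errors are discarded portably.
ql() {
    qlmanage -p "$@" >/dev/null 2>&1
}
```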
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial:</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character in a single-byte encoding or up to four bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
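<p>If you want to measure the trade-off yourself, a quick experiment along these lines works (file names are illustrative; a real dump, being repetitive SQL text, compresses well with either tool):</p>

```shell
# Build a dump-like sample file, compress it both ways, compare sizes.
yes 'INSERT INTO products VALUES (42, "widget", 9.99);' | head -n 20000 > sample.sql
gzip  -c sample.sql > sample.sql.gz
bzip2 -c sample.sql > sample.sql.bz2
wc -c sample.sql sample.sql.gz sample.sql.bz2   # byte counts side by side
```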
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: compressing inline lengthens the dump, and all the while (by default with MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump, meaning other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of a table can be lost as writes occur to it during the backup process; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
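<p>Putting the pieces together, a backup command combining these flags with compression might look like the following sketch (user and database names are placeholders; with table locking mitigated by the flags, compressing inline is less of a concern than in the naive example earlier):</p>

```shell
# Sketch only: consistent InnoDB snapshot, no MyISAM table locks,
# keys and autocommit deferred for faster re-import, gzipped output.
mysqldump -uuser -p \
    --single-transaction --skip-lock-tables \
    --disable-keys --no-autocommit \
    mydatabase | gzip -c > mydump.sql.gz
```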
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository (with a working tree) in the current directory, or, if a directory argument is given, in that directory. For a bare repository, pass --bare.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified and deleted tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
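<p>To see which files are currently marked this way, git ls-files -v tags them with a lowercase letter. A quick sketch in a throwaway repository (file and identity values are illustrative):</p>

```shell
# Create a demo repo, commit a defaults file, mark it assume-unchanged,
# then list marked files: they carry a lowercase tag (h).
git init -q demo && cd demo
echo 'db_password = changeme' > defaults.ini
git add defaults.ini
git -c user.name=demo -c user.email=demo@example.com commit -q -m 'add defaults'
git update-index --assume-unchanged defaults.ini
git ls-files -v | grep '^[a-z]'
```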
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally it's shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, plus sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will only sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
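<p>A shell version of the same experiment (the file name test.sh mirrors the examples above):</p>

```shell
# In sh, $0 is the program name and $1 the first argument,
# matching the C/PHP convention.
cat > test.sh <<'EOF'
#!/bin/sh
echo "$0"
echo "$1"
EOF
sh test.sh helloworld
# prints:
# test.sh
# helloworld
```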
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C, and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin/feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL rubygem as directed by Rake, it links against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often in the course of the index's lifecycle want to update documents.  This can prove tricky with the current implementation as there is no insitu update feature, you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as following:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
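<p>Putting this together, updating a document becomes a delete-then-re-add operation. Here's a minimal sketch of how that might look, assuming a 'path' keyword field and a $newTitle value (both made up for illustration):</p>&#13;
<pre>// Locate the existing document(s) by their unique keyword field&#13;
$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
foreach ($index-&gt;termDocs($term) as $id) {&#13;
    $index-&gt;delete($id);&#13;
}&#13;
&#13;
// Re-add the replacement document&#13;
$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('path', '/somepath/somewhere'));&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $newTitle));&#13;
$index-&gt;addDocument($doc);&#13;
$index-&gt;commit();&#13;
</pre>&#13;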
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter = null) method takes an optional 'filter' parameter. The online documentation makes scant mention of what values this $filter parameter can take. Luckily, in the comments, Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   string $controller_class&#13;
 * @return  ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
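<p>In the Zend_Controller case you typically only care about action methods, which by convention end in 'Action'. A quick follow-up sketch that filters the list down (the controller class name here is hypothetical):</p>&#13;
<pre>$actions = array();&#13;
// getActionList() returns an ArrayObject of ReflectionMethod objects&#13;
foreach ($this-&gt;getActionList('IndexController') as $method) {&#13;
    // Keep only the public methods whose names end in 'Action'&#13;
    if (substr($method-&gt;getName(), -6) === 'Action') {&#13;
        $actions[] = $method-&gt;getName();&#13;
    }&#13;
}&#13;
</pre>&#13;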
<p>As on many platforms, it seems PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic-table layout, in a format suitable for printing at A3.  Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or, if I am at home, flicking through to the back pages of the GOF book, or, in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster; you can always revert to what you had by re-importing from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. It can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you use a language pack, convert that to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you.  So, in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative and well spaced, using a minimum of fonts. Giant mastheads, fancy bullets and a mess of fonts aren't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leaves a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized; Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p>&#13;
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and of course a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience of and understanding of how these low level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple tcpclient in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit though of learning a bit of C or C++, is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in. But similarly, a sometimes useful characteristic that makes the environment still relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array of 20 chars (19 usable characters plus the NUL terminator), and str itself points to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
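<p>For example, here's a small sketch that builds a CSV in memory with fputcsv() and captures it as a string (the data is made up for illustration):</p>&#13;
<pre>// Open an in-memory stream instead of a real file&#13;
$fh = fopen('php://memory', 'wb+');&#13;
fputcsv($fh, array('id', 'name'));&#13;
fputcsv($fh, array(1, 'Aaron'));&#13;
&#13;
// Rewind to the start before reading the buffer back&#13;
rewind($fh);&#13;
$csv = stream_get_contents($fh);&#13;
fclose($fh);&#13;
</pre>&#13;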
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is simply to use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded post data) and the response from your server.</p>&#13;
<p>Now, assuming you can work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
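<p>As an aside, the same resubmission can be scripted with PHP's curl extension, which makes it easier to automate later. A minimal sketch, assuming you have lifted the raw urlencoded body out of the failure email (the callback URL and the abridged post body here are placeholders):</p>

```php
<?php
// Hypothetical sketch: resubmit a captured Worldpay callback body using
// PHP's curl extension. $postData is the raw urlencoded body taken from
// the failure email (abridged here); the callback URL is a placeholder.
$postData = 'testMode=0&transId=1000000000&transStatus=Y&callbackPW=xyz';

$ch = curl_init('https://mysite.com/callback');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $postData);   // sent as-is, already urlencoded
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);    // capture the response body
$response = curl_exec($ch);
curl_close($ch);
```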
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>It's frankly amazing that the next two issues ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on other (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
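<p>The translation the classloader performs can be sketched in plain PHP. This is an illustration of the behaviour described above, not Magento's actual autoloader code; the MyPackage_MyModule prefix is a made-up example:</p>

```php
<?php
// Illustrative sketch of the alias-to-path translation described above.
// Not Magento's real autoloader code; MyPackage/MyModule is made up.
function classToPath($modelAlias)
{
    // 'mymodule/a_long_name_for_a_model' -> 'A_Long_Name_For_A_Model':
    // each underscore-separated chunk gets its first letter uppercased.
    $suffix = substr($modelAlias, strpos($modelAlias, '/') + 1);
    $suffix = str_replace(' ', '_', ucwords(str_replace('_', ' ', $suffix)));

    $class = 'MyPackage_MyModule_Model_' . $suffix;

    // The autoloader then swaps underscores for directory separators.
    return str_replace('_', '/', $class) . '.php';
}

echo classToPath('mymodule/a_long_name_for_a_model');
// MyPackage/MyModule/Model/A/Long/Name/For/A/Model.php

echo classToPath('mymodule/alongnameforamodel');
// MyPackage/MyModule/Model/Alongnameforamodel.php -- never ALongNameForAModel.php
```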
<p>On Windows this is fine; on a case-sensitive file system (e.g. a typical Unix file system, or case-sensitive HFS+ on a Mac) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to set the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find the attribute code, look it up either in the db (eav_attribute) or in the admin backend under Catalog-&gt;Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
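<p>Put together, a batch loop looks something like this. It assumes a Magento 1.x bootstrap and a custom 'num_sales' attribute; lookupSalesRank() is a hypothetical helper standing in for however you compute the new value:</p>

```php
<?php
// Sketch of a batch attribute update, assuming a Magento 1.x install and
// a custom 'num_sales' attribute. lookupSalesRank() is a hypothetical
// helper standing in for your own ranking logic.
require_once 'app/Mage.php';
Mage::app();

$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    $product->setNumSales(lookupSalesRank($product->getId()));
    // Writes only this attribute's value row instead of a full EAV save.
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```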
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old and will be matched by the query.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first, though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout XML for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where they removed the second parameter passed to the delete command. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However, by default, your shell environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile, or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or LC_ALL='en_GB.UTF-8').</p>&#13;
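<p>For example, this is all it takes (a sketch using a stand-in file rather than your real ~/.profile, and de_DE.UTF-8 purely as an example locale; check 'locale -a' for what your system provides):</p>

```shell
# Persist a UTF-8 locale in a shell profile so every new terminal session picks it up.
profile=/tmp/demo_profile            # stand-in for ~/.profile
echo "export LC_ALL='de_DE.UTF-8'" >> "$profile"
tail -1 "$profile"
```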
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ: a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
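<p>The same conversion can be exercised from the shell with the iconv command-line tool, which performs the same re-encoding as the PHP extension (the octal escape below is just a way to produce the raw iso-8859-1 copyright byte for demonstration):</p>

```shell
# Byte 0xA9 (octal \251) is the iso-8859-1 copyright sign; iconv re-encodes it
# as the two-byte utf-8 sequence, so a utf-8 page renders it correctly.
printf '\251 2010' | iconv -f ISO-8859-1 -t UTF-8
```

The output is the utf-8 encoding of the © sign followed by " 2010"; without the conversion, the raw 0xA9 byte is invalid utf-8 and renders as a question mark.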
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output, you need to pass a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It isn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick, with the BSD sed that ships on Mac OSX, is to pass an empty backup suffix: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
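<p>If the same script needs to run on both GNU/Linux and Mac OSX, attaching a non-empty suffix directly to -i sidesteps the incompatibility entirely (a sketch; the /tmp paths are just for demonstration):</p>

```shell
# -i.bak (suffix attached, no space) is accepted by both GNU and BSD sed,
# writing a helloworld.txt.bak backup alongside the edited file.
printf 'hello world\n' > /tmp/helloworld.txt
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt
```

The file now reads "goodbye world"; delete the .bak backup afterwards if you don't want it.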
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) produces the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
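<p>For comparison, the same iteration can be written without seq at all by letting bash expand the array elements directly (a minimal sketch with placeholder paths):</p>

```shell
# Iterate over the array elements themselves; quoting "${FILES[@]}" keeps
# paths containing spaces intact as single elements.
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```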
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don't work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group; handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
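<p>Of course, when you do remember it at push time, the -u flag does the same job in one step. A quick sketch, reusing the zendesk branch name from the example above:</p>

```shell
# Push the current branch and set its upstream in one step:
git push -u origin zendesk

# Long form of the same flag:
git push --set-upstream origin zendesk
```

<p>After either command, git pull and git push just work on that branch with no further arguments.</p>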
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables set in one stage cannot be passed along the pipeline, as each subprocess gets a brand new copy of the environment.</p>
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
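<p>To make the behaviour concrete, here is a minimal sketch; the process-substitution workaround shown at the end is bash-specific:</p>

```shell
#!/usr/bin/env bash

# A variable modified inside a pipeline stage is lost: the while loop
# runs in a subshell, so its copy of $count vanishes when the stage ends.
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "after pipeline: $count"              # still 0

# One workaround from the FAQ above: feed the loop via process
# substitution so it runs in the current shell, not a subshell.
count=0
while read -r line; do
    count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "after process substitution: $count"  # 3
```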
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you&#39;re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit&#39;s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a &#39;fast-forward&#39;.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
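<p>Here is a quick throwaway-repo sketch of the behaviour. It assumes git 2.28+ for the init -b flag, and the branch names are just examples:</p>

```shell
# Build a sandbox repo where main and feature have diverged.
d=$(mktemp -d) && cd "$d"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git config merge.ff only

git commit -q --allow-empty -m base
git checkout -q -b feature
git commit -q --allow-empty -m feature-work
git checkout -q main
git commit -q --allow-empty -m main-work   # main moved too: no fast-forward possible

# With merge.ff=only this merge is refused instead of silently
# creating a merge commit.
git merge feature || echo "refused: rebase feature first (or use --no-ff deliberately)"
```

<p>Rebase the feature branch onto main (or vice versa) and the same merge goes through as a clean fast-forward.</p>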
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: Statist TDD and Mockist/London School TDD. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and instead focuses on the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same.</p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
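<p>As a quick runnable illustration of the same trick, here is a sketch using the sqlite3 shell (handy for a one-liner demo; NULLIF and MIN behave the same way in MySQL). The products table and its columns are made up for the example:</p>

```shell
sqlite3 <<'SQL'
CREATE TABLE products (group_id INTEGER, price NUMERIC);
INSERT INTO products VALUES (1, 0.00), (1, 9.99), (1, 4.50),
                            (2, 0.00), (2, 0.00);
-- NULLIF turns the zero prices into NULLs, which MIN then ignores.
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products
GROUP BY group_id;
SQL
```

<p>Group 1 reports 4.5 rather than 0. Note that group 2, where every price is zero, reports NULL; a reminder to handle the all-zero case in your application code.</p>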
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our Magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-gd php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research, it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember, you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, first open up a new terminal and start the Selenium server</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by setting the socket path for the MySQL version you&#39;re using in a PHP ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea, but definitely some object wrappers would help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary who oversaw innovations such as personal computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xerox parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who the Executive-X he mentioned in <em>The Early History of Smalltalk</em> was (it was Jerry Elkind). However, I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about: scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did; however, Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read.</p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his lab, the other labs, or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. Inside it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs.</p>
<p>The book contains a number of particularly powerful scenes. Two stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish, but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
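<p>In isolation the guard behaves like this; $basePrice here is just a stand-in for what getBaseCalculationPrice() returns for a free quote item:</p>

```php
<?php
// $basePrice stands in for getBaseCalculationPrice() on a free quote item,
// which comes back as an empty string rather than 0.
$basePrice = '';

// The guard from the fix: coerce anything that isn't a positive price to 0.00.
$unitPrice = ((float) $basePrice > 0) ? $basePrice : 0.00;

// 0 is a valid decimal for <unit-price>; an empty string is not.
echo "<unit-price currency=\"USD\">{$unitPrice}</unit-price>\n"; // prints <unit-price currency="USD">0</unit-price>
```

<p>With the guard in place the generated XML always carries a numeric value, so GoogleCheckout&#39;s decimal validation passes even for free items.</p>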
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means if two classes have the name Mage_Core_Model_Foo, one in local and the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and reasons about why its lasting success came more from blazing a trail for others to follow than from being a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite formalising most of the vocabulary for OO software development during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list setup in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh (this relies on zsh&#39;s auto_cd option). Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine; however, to debug during a phpunit test you would normally do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to its own localhost:9000 back to port 9000 on the machine you ran ssh from. So when xdebug on the VM connects to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months, PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999 when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires the stable release of package y, while package z requires the beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves. </p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so rather than writing the dump out to a temporary file first, we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to feed it the command&#39;s output directly.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
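<p>Process substitution is worth a moment&#39;s digression: bash exposes the inner command&#39;s output as a file-like path (something like /dev/fd/63), so any tool that insists on filename arguments can consume another command&#39;s output. A minimal, self-contained illustration with diff (and note that many builds of xmllint will also read standard input if you pass - as the filename, so a plain pipe may work too):</p>

```shell
# diff requires two filenames; <( ... ) turns each command's
# output into a readable /dev/fd path, no temporary files needed.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo identical
```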
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
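<p>To see these flags in action without touching a real project, here&#39;s a self-contained run-through in a throwaway repository (the file names and commit message are invented for the example):</p>

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo one > a.txt
echo two > b.txt
git add .
# Inline -c config avoids depending on a global git identity.
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add files'
# Print only the names of the files touched by HEAD:
git show --name-only --pretty=format: HEAD
```

<p>Here --pretty=format: suppresses the commit header so only the file list is printed; drop it to get the usual git show header as well.</p>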
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: because the grant tables are skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password = PASSWORD(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer), I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than on pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was generally respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails examines the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh it, I need to prune my branch list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
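<p>As a footnote, the same cleanup can happen automatically at fetch time: git fetch --prune does the fetch and the prune in one step. A self-contained demonstration using a local directory as the &quot;remote&quot; (all paths and branch names are invented for the example):</p>

```shell
set -e
tmp=$(mktemp -d)
# Build an "origin" with a spare branch, then clone it.
git init -q "$tmp/origin"
cd "$tmp/origin"
echo hi > file
git add file
git -c user.name=demo -c user.email=demo@example.com commit -qm init
git branch old-feature
git clone -q "$tmp/origin" "$tmp/clone"
# Delete the branch upstream, as `git push origin :branch` would.
git branch -D old-feature
cd "$tmp/clone"
git branch -r           # still lists origin/old-feature
git fetch -q --prune    # fetch and prune in one step
git branch -r           # origin/old-feature is gone
```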
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination, it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you can imagine another engineer coming in to XSS-proof the code, who fixed one spot but (programmers are human) missed the other, identical, line; and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine; it does what you tell it to do, but for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not supported by the stock Chef gem (currently version 10.12.0). To use them with Chef Solo you need version 10.14.0 or above, and that means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence, much as ; does in regular bash (the backslash stops the shell consuming it before find sees it).</p>
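<p>Putting the touch trick and the date-range test together, here&#39;s a quick self-contained run-through. The file names and dates are invented for the example, and the -name filter just keeps the boundary files themselves out of the results:</p>

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# Boundary files marking the start and end of the range.
touch -t 202001010000 start_date_file
touch -t 202012310000 end_date_file
# One file inside the range, one before it.
touch -t 202006150000 in_range.txt
touch -t 201906150000 too_old.txt
find . -type f -name '*.txt' -newer start_date_file ! -newer end_date_file
# lists ./in_range.txt only
```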
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) power pack, as I did from my old 13&quot; Macbook Pro, and it has an equal or higher wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher rated power supply can support a lower rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
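<p>As a sketch, assuming the default Macports prefix of /opt/local, adding the shell and switching to it looks like this:</p>
<pre><code>$ sudo sh -c &#39;echo /opt/local/bin/bash &gt;&gt; /etc/shells&#39;
$ chsh -s /opt/local/bin/bash
</code></pre>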
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; - \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it to give you raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because the awk program is itself wrapped in single quotes for the shell, the quote character is passed in via the q variable instead. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
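<p>To see just the formatting stages at work (no MySQL required), you can feed the pipeline some sample lines; the table and column names above are of course placeholders:</p>
<pre><code>$ printf &#39;foo\nbar\nbaz\n&#39; | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; - | sed &#39;s/\(.*\)/[\1];/&#39;
[&#39;foo&#39;,&#39;bar&#39;,&#39;baz&#39;];
</code></pre>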
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
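<p>For illustration, with a hypothetical last_modified field and an arbitrary range, the full query would look like:</p>
<pre><code>-(-last_modified:[2012-01-01T00:00:00Z TO NOW] AND last_modified:[* TO *])
</code></pre>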
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic; it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. That means it has to be constructed afresh on each iteration, but it also allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
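<p>As a sketch (the version number and extracted directory name here are just examples), that looks like:</p>
<pre><code>$ sudo tar xzf PhpStorm-4.0.1.tar.gz -C /opt
$ sudo ln -s /opt/PhpStorm-4.0.1 /opt/PhpStorm
</code></pre>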
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called (and I don&#39;t think they do it this way anymore) a &#39;stage 1&#39; install. A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils... and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break, in the absolute best case, I mean they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. First step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
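<p>That is:</p>
<pre><code>$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>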
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. This is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column was first and the product id column second (unlike the if branch, where the entity id comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to be first in the result set so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie them back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept, that software is about communication, is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to Twig files, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
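<p>Alternatively, you can keep the mapping out of vimrc entirely by dropping it into a filetype-detection file, which Vim sources automatically from its runtimepath. A minimal sketch, assuming a standard ~/.vim setup:</p>

```shell
# Create a ftdetect file so Vim picks up the mapping automatically
mkdir -p ~/.vim/ftdetect
echo 'au BufRead,BufNewFile *.twig set filetype=htmljinja' > ~/.vim/ftdetect/twig.vim
cat ~/.vim/ftdetect/twig.vim
```

<p>Either way the effect is the same; the ftdetect file just keeps vimrc a little tidier.</p>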
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile gets sourced only on login. Specifically, this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell from within a session, such as with the <em>su -</em> command or via an explicit login-shell option that some desktop environments provide. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc only runs if .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a <em>source ~/.bashrc</em> in your .bash_profile and then put everything in .bashrc.</p>
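<p>A minimal sketch of that second approach (written to /tmp here so as not to clobber a real dotfile):</p>

```shell
# A ~/.bash_profile that simply delegates to ~/.bashrc,
# so login and non-login interactive shells behave the same
cat > /tmp/demo_bash_profile <<'EOF'
# source .bashrc if it exists
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
EOF
cat /tmp/demo_bash_profile
```

<p>Copy the same contents into your real ~/.bash_profile if you want everything to live in .bashrc.</p>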
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably it is stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
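<p>To see what the <em>sed</em> step is doing before letting <em>dpkg</em> loose, you can run it over a canned sample of <em>dpkg --get-selections</em> output (the package names here are made up):</p>

```shell
# Packages marked 'deinstall' (removed but not purged) get rewritten to 'purge';
# installed packages pass through untouched
printf 'vim\tinstall\nsomepkg\tdeinstall\n' | sed 's/deinstall$/purge/'
```

<p>Feeding the rewritten selections back into <em>dpkg --set-selections</em> is what queues those packages for purging, and <em>dpkg -Pa</em> then processes the pending purges.</p>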
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 shared libraries. But when you hit these sorts of issues, it&#39;s always best to see what else is missing too. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So, looking at this, I am missing both a compatible libxss and the various Qt libraries.</p>
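<p>On listings this long it helps to filter for just the unresolved entries; grepping ldd&#39;s output for &#39;not found&#39; does the trick. Shown here against a canned sample of the output above rather than the live binary:</p>

```shell
# Reduce ldd-style output to only the libraries the loader cannot resolve
# (sample lines stand in for `ldd /usr/bin/skype`)
printf '%s\n' \
  'libXss.so.1 => not found' \
  'libQtGui.so.4 => not found' \
  'libm.so.6 => /lib32/libm.so.6 (0xf749e000)' \
| grep 'not found'
```

<p>Against the real binary that would be <em>ldd /usr/bin/skype | grep &#39;not found&#39;</em>.</p>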
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
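<p>For reference, the test.c used in these examples can be any minimal C program; the one below is a stand-in of my own, not taken from the original error output:</p>

```shell
# Write a minimal C program to experiment with
cat > /tmp/test.c <<'EOF'
#include <stdio.h>

int main(void)
{
    printf("hello, 32/64 bit world\n");
    return 0;
}
EOF
# With libc6-dev-i386 installed you can then compare the generated assembly:
#   gcc -m32 -S -masm=intel -o test32.s /tmp/test.c
#   gcc -m64 -S -masm=intel -o test64.s /tmp/test.c
cat /tmp/test.c
```

<p>Diffing test32.s against test64.s shows the register widths and calling conventions changing between the two targets.</p>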
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be dizzying at times; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t already doing this. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to resize a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 scales to 1280x720 exactly, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? Convert treats the requested geometry as a bounding box, so it would actually resize the image to 1152x720, preserving its 16:10 ratio. If we <em>really</em> want it to ignore common sense and squish things down to exactly 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
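<p>The filename rewrite in the loop above is plain sed, and can be checked on its own without touching any images (the filename here is made up):</p>

```shell
# The same substitution the loop uses to derive the output filename
echo 'holiday.jpg' | sed 's/.jpg$/-resized.jpg/'
# prints: holiday-resized.jpg
```

<p>Testing the substitution like this first is a cheap way to avoid clobbering a directory of originals with a mistyped pattern.</p>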
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin&#39;. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
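<p>If the colon syntax still feels too cryptic, newer versions of git (1.7.0 onwards, if I remember right) also accept an explicit --delete form that does the same thing. A throwaway demo against a local bare repository (the /tmp paths are made up for the example):</p>

```shell
# Set up a scratch remote and working repo (demo paths, not real projects)
rm -rf /tmp/demo-remote.git /tmp/demo-work
git init -q --bare /tmp/demo-remote.git
git init -q /tmp/demo-work
git -C /tmp/demo-work -c user.email=demo@example.com -c user.name=demo \
    commit --allow-empty -m 'initial commit' -q
git -C /tmp/demo-work remote add origin /tmp/demo-remote.git

# Push the current branch up as 'develop', then delete it again
git -C /tmp/demo-work push -q origin HEAD:develop
git -C /tmp/demo-work push -q origin --delete develop

# No heads left on the remote
git ls-remote --heads /tmp/demo-remote.git
```

<p>Under the hood, --delete builds exactly the :branchname refspec described above.</p>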
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is that Sponge waits for end-of-file (EOF) on its input before opening and writing to the output file. I.e. it soaks up all the input data before it commences writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent: it is safe to convert iso-8859-1 content as if it were cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
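<p>For example (-b suppresses the filename and -i asks for a MIME-type description that includes the detected charset; on BSD-flavoured systems the equivalent flag is -I):</p>

```shell
$ file -bi myfile.txt
```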
<p>So if we have a directory of, say, C source files to convert, we can use bash, iconv and sponge to save ourselves the tedium of converting each file manually to a new copy and then replacing the original with that copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it to the original file.</p>
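<p>If you can&#39;t install Moreutils, the same in-place effect can be had with a temporary file; a sketch of the equivalent loop without sponge:</p>

```shell
# In-place conversion without sponge: write to a temp file, then replace the original
for FILE in *.c *.h; do
    TMP=$(mktemp) &&
    iconv -f cp1252 -t utf-8 "$FILE" > "$TMP" &&
    mv "$TMP" "$FILE"
done
```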
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day, and if they are like me, they never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-hosted tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
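<p>For completeness, a plain pipe gets the same result here; spelling out &#39;-f -&#39; tells tar explicitly to read the archive from stdin rather than relying on its compiled-in default:</p>

```shell
$ wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxvf -
```

<p>Process substitution really earns its keep when a command insists on a filename argument rather than reading stdin.</p>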
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
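<p>The backticks work in any Bourne-style shell, but the POSIX &#39;$(...)&#39; form of command substitution reads more clearly and nests properly, so the same one-liner can equally be written as:</p>

```shell
$ drush pm-disable $(drush pm-list --no-core --type=module --pipe)
```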
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but it has not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
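<p>With Git 1.7.0 or later the push and the upstream configuration can be combined, since the -u (--set-upstream) flag records the tracking relationship as part of the push:</p>

```shell
$ git push -u origin my-new-feature
```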
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well), use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G <em>append</em> to the user&#39;s existing list of groups. Without it, the existing supplementary groups will be replaced with those supplied.</p>
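<p>To confirm the change, list the user&#39;s group memberships afterwards (note that new groups only take effect in sessions started after the change):</p>

```shell
$ id -nG aaron
```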
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making amendments to a repo&#39;s master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want the changes on the master branch, and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
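<p>If the stash entry isn&#39;t needed again, &#39;git stash pop&#39; will apply it and drop it from the stash list in a single step:</p>

```shell
$ git stash
$ git checkout develop
$ git stash pop
```

<p>A bare &#39;git stash&#39; is shorthand for &#39;git stash save&#39;.</p>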
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
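<p>There is also a shorthand that lets Git derive the local branch name from the remote one:</p>

```shell
$ git checkout --track origin/develop
```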
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework globally on your system (which means you don&#39;t have to bundle it in each of your applications, plus you get the zf command-line tool), do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
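<p>For example, a php.ini include_path that puts a bundled library ahead of the system-wide PEAR copy might look like this (the directory names below are illustrative, not prescriptive):</p>

```ini
; Local application libraries first, then the global PEAR install
include_path = ".:/var/www/myapp/library:/usr/share/php"
```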
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
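<p>You can check the identity is being picked up by reading the config back as the jenkins user (the -H flag makes sudo set $HOME to the jenkins home directory, which is where git looks for .gitconfig):</p>

```shell
$ sudo -u jenkins -H git config --global user.name
```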
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to complete a build before the project workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
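<p>Once you know the command names, the invocation gets repetitive. A small wrapper function helps; the jcli name and JENKINS_URL variable here are my own convention, not part of Jenkins:</p>

```shell
# Wrapper so CLI commands can be issued tersely. Assumes jenkins-cli.jar
# sits in the current directory and the server URL is in JENKINS_URL.
JENKINS_URL=${JENKINS_URL:-http://localhost:8080}
jcli() {
  java -jar jenkins-cli.jar -s "$JENKINS_URL" "$@"
}
```

<p>With that in place, jcli build myjob triggers a build and jcli help lists the available commands.</p>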
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reload the server configuration</li>
<li>restart - restart the server</li>
<li>exit - shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
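<p>If you issue these often, a tiny helper keeps the URLs consistent. The jenkins_admin name is mine, and note that a secured Jenkins install will also need authentication flags on the curl call:</p>

```shell
# Build and issue the admin request for a given server and command,
# e.g. jenkins_admin localhost:8080 reload
jenkins_admin() {
  local server="$1" command="$2"
  curl "http://${server}/${command}"
}
```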
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you, then (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
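<p>If you&#39;d rather script the edit, a sed one-liner does it. It&#39;s demonstrated here on a scratch copy - the real file is /etc/default/jenkins and needs sudo, and the port 8180 is just an example:</p>

```shell
# Create a scratch copy with the stock setting, then rewrite the port.
printf '# port for HTTP connector (default 8080; disable with -1)\nHTTP_PORT=8080\n' > jenkins.defaults
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8180/' jenkins.defaults
grep '^HTTP_PORT=' jenkins.defaults   # prints HTTP_PORT=8180
```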
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
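<p>The query is along these lines - my reconstruction rather than the gist&#39;s exact code, with illustrative column choices (see the embedded gist for the original):</p>

```shell
# Report each table's data + index footprint in MB, largest first.
largest_tables() {
  local user="$1"
  mysql -u"$user" -p -t -e "
    SELECT table_schema, table_name,
           ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
    FROM information_schema.TABLES
    ORDER BY (data_length + index_length) DESC
    LIMIT 20;"
}
```

<p>Run it as: largest_tables root</p>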
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer:</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not terribly helpful on its own, but it led Google to a solution: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
</ol>
<p>4. Refer to the remote branch using --set-upstream:</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
<p>Personally I find option 4 the best, with the least amount of work.</p>
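<p>For option 1, rather than editing .git/config by hand you can make the same change with git config commands. A quick sketch in a scratch repository, using the example URL from above:</p>

```shell
# Demo in a scratch repo: tell 'git pull' which remote and branch
# master should merge from - equivalent to the [branch "master"]
# stanza that git's error message suggests adding.
git init -q demo && cd demo
git remote add origin git@github.com:ajbonner/foo.git
git config branch.master.remote origin
git config branch.master.merge refs/heads/master
```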
<p>To avoid having to do this for every new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
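<p>The shape of the function is roughly this - a sketch of the approach rather than the gist&#39;s exact code, with an illustrative name and flags. Note that SHOW TABLES LIKE takes SQL wildcards, so the pattern is mytables% rather than a shell glob:</p>

```shell
# Dump only the tables of $db whose names match $pattern.
mysqldump_bypattern() {
  local user="$1" db="$2" pattern="$3"
  local tables
  # -N suppresses the column-header row, leaving bare table names
  tables=$(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '$pattern'" "$db") || return 1
  # word-splitting on $tables is intentional: one argument per table
  mysqldump -u"$user" -p "$db" $tables
}
```

<p>Usage: mysqldump_bypattern root mydb &#39;mytables%&#39; &gt; mytables.sql</p>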
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
[:mykey, &#39;another_key&#39;].each { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String. So :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect as much of the learning Ruby literature says to basically ignore them for now, or launch into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer definitively. My gut, though, says no. I feel in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
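<p>One caveat: in bash an alias body is substituted verbatim, so the &quot;$@&quot; above expands to the shell&#39;s own positional parameters (usually empty) rather than the alias arguments - the filenames only work because they land after the redirection. A function form passes arguments through explicitly:</p>

```shell
# qlmanage is the OS X Quick Look tool; silence its chatty output.
ql() { qlmanage -p "$@" > /dev/null 2>&1; }
```

<p>e.g. ql screenshot.png</p>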
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character if the output is ASCII or up to four bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it&#39;s also considerably slower than gzip.</p>&#13;
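<p>It&#39;s easy to measure the trade-off for yourself on a sample file (the sizes below will vary wildly with your dump&#39;s actual content, and you can prefix each step with the time builtin to compare speed):</p>

```shell
# Make a 1MB text file standing in for a dump, compress it both ways,
# and compare the resulting sizes.
head -c 1000000 /dev/zero | tr '\0' 'a' > sample.sql
gzip -c sample.sql > sample.sql.gz
bzip2 -c sample.sql > sample.sql.bz2
ls -l sample.sql sample.sql.gz sample.sql.bz2
```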
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, it should be avoided: compressing inline slows the dump down, and for as long as it runs (by default with MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but lengthy dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysqldump issues a BEGIN statement before dumping the contents of a table, giving a consistent view of the table without blocking other clients: writes can occur while the backup is taking place without affecting it. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the consistency of a MyISAM table can be lost as writes occur during the backup process; that risk has to be weighed against the cost of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This carries unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
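Pulling the options from this post together, the dump step can be wrapped in a small helper function. A sketch; the -uuser credential and the output path are placeholders to adapt to your own setup:

```shell
# Dump one database with the lock-friendly and import-friendly options
# discussed above. Placeholders: -uuser and the "$db.sql" output path.
backup_db() {
  local db="$1"
  mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit \
            -uuser -p "$db" > "$db.sql"
}
```

Then a backup is just: backup_db mydatabase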
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>Say you have the line</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute command <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply change the expanding part of the expression to <code>.*?</code>, which makes it match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is the canonical naming convention for a bare git repository (i.e. one that has only the meta information and not a working copy), which usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by recreating PEAR's missing download cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name; in Ruby, as in Perl, it is the first argument passed into the program. </p>&#13;
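The C/Bash side of the comparison is easy to confirm with a throwaway script; a sketch, where the filename is just illustrative:

```shell
# $0 holds the script name (the C convention); $1 holds the first argument.
cat > argv_demo.sh <<'EOF'
echo "program:   $0"
echo "first arg: $1"
EOF
sh argv_demo.sh helloworld
# program:   argv_demo.sh
# first arg: helloworld
```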
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets confused because, when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored, the distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP it seems documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
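<p>Before running the conversion for real, you can rehearse the sed and iconv steps on a small throwaway file to check they behave as expected (the sample file and its contents here are illustrative, not part of the guide):</p>&#13;

```shell
# Make a tiny latin1-encoded sample standing in for the real dump.
printf 'CREATE TABLE t (c varchar(10)) DEFAULT CHARSET=latin1; -- caf\303\251\n' \
  | iconv -f utf-8 -t latin1 > sample.sql

# Rewrite the charset declarations, then transcode the file contents to UTF-8.
sed 's/latin1/utf8/g' sample.sql | iconv -f latin1 -t utf-8 > sample-utf8.sql

# The declaration is rewritten and the accented character survives the round trip.
grep -c 'CHARSET=utf8' sample-utf8.sql
```

<p>Bear in mind the global sed replace rewrites every occurrence of the string 'latin1' in the dump, so it is worth eyeballing a diff before importing the real file.</p>&#13;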
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you are importing a language pack, convert that to UTF-8 as well:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is twofold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative and well spaced, with a minimum of fonts. Giant mastheads, fancy bullets, a mess of fonts: none of it is impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarity, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
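<p>The point is easy to demonstrate in two lines (the constant name is made up for illustration):</p>&#13;

```ruby
# Reassigning a Ruby constant merely prints a warning on stderr;
# the assignment itself goes through.
MAX_RETRIES = 3
MAX_RETRIES = 5   # warning: already initialized constant MAX_RETRIES

puts MAX_RETRIES  # prints 5, not 3
```

<p>So a Ruby constant is a naming convention backed by a warning, not a guarantee.</p>&#13;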
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it as a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely check out the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the object's current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design-challenged developer like myself, http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[For some people, using #VIM comes naturally, i.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (e.g. Mac OS X). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys that haven’t been seduced by Python. And of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience and understanding of how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple TCP client in PHP, strip out all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit though of learning a bit of C or C++ is to get an “appreciation” of memory management. C is almost unique among the mainstream programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str itself points to an address in memory where 20 bytes have been reserved for it. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. These were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, e.g. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded post data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
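<p>The saved post body is just URL-encoded key/value pairs, so before replaying you can also pull out individual fields (transId, cartId and so on) with standard tools and cross-check them against your order records. A sketch, using an abbreviated made-up body:</p>&#13;

```shell
# An abbreviated stand-in for the post data attached to the failure email.
BODY='testMode=0&transId=1000000000&transStatus=Y&cartId=12345678'

# Split the pairs on '&', then strip the key to leave one field's value.
trans_id=$(printf '%s\n' "$BODY" | tr '&' '\n' | sed -n 's/^transId=//p')
echo "replaying transaction $trans_id"   # replaying transaction 1000000000
```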
<p>You can make this more sophisticated by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
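For reference, a property value holding several externals simply lists one definition per line; set on a single vendor directory it might look like this (the directory names on the left are illustrative, not a required layout):

```
Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/
dojo http://svn.dojotoolkit.org/src/tags/release-1.4.3/
```

Pinning to a release tag, as here, keeps your build reproducible; pointing an external at trunk means every svn update can pull in new upstream code.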
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[development tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
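The underscore-to-path mapping can be sketched in one line (an illustration of the naming convention, not Magento's actual autoloader code):

```shell
# Each underscore in a Magento class name becomes a directory separator.
class='MyPackage_MyModule_Model_ALongNameForAModel'
path="$(echo "$class" | tr '_' '/').php"
echo "$path"   # MyPackage/MyModule/Model/ALongNameForAModel.php
```

Since only the underscores are mapped, the mixed-case segments survive intact into the path, which is exactly where case-sensitive filesystems bite.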
<p>On Windows this is fine; on a case-sensitive filesystem (e.g. case-sensitive HFS+ on a Mac, or most Unix filesystems) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes roughly a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The basename and dirname bash builtins are handy and behave similarly to their PHP cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table that has a date set more than 1 month old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the row is more than 30 days old.</p>&#13;
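MySQL's interval arithmetic also accepts calendar units directly, so the "one month" version can be written without approximating it as 30 days (a sketch; the orders table and its created_at/created_ts columns are hypothetical names):

```sql
-- Rows whose created_at is more than one calendar month old
SELECT * FROM orders
WHERE created_at < DATE_SUB(NOW(), INTERVAL 1 MONTH);

-- For a UNIX timestamp column, convert first
SELECT * FROM orders
WHERE FROM_UNIXTIME(created_ts) < DATE_SUB(NOW(), INTERVAL 30 DAY);
```

Keeping the function call on the constant side of the comparison, as here, leaves the column untouched and so lets MySQL use an index on it.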
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by subbing the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS+ filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
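<p>As a concrete example (I'm using de_DE here; substitute any UTF-8 locale reported by locale -a):</p>

```shell
# add this line to /etc/profile, ~/.profile or ~/.bash_profile
export LC_ALL='de_DE.UTF-8'

# confirm the environment picked it up
echo "$LC_ALL"   # prints: de_DE.UTF-8
```

<p>Open a new terminal (or source the profile file) for the change to take effect.</p>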
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. The extended characters, however (accents, symbols and umlauts), differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>On Mac OSX (and BSD sed generally) it doesn't work, and you'll either get a script processing error or something like: sed: -i: No such file or directory. BSD sed requires a backup-suffix argument after -i, even if that suffix is empty.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
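<p>If you need something that runs under both BSD and GNU sed (GNU accepts a bare -i, BSD does not), a non-empty attached suffix is a portable middle ground; a quick sketch:</p>

```shell
printf 'hello world\n' > /tmp/helloworld.txt

# -i.bak edits in place and keeps the original as helloworld.txt.bak;
# the attached-suffix form is accepted by both BSD and GNU sed
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt

cat /tmp/helloworld.txt   # prints: goodbye world
```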
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento. Below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br /> http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expression generates the list of array indices: ${#FILES[@]} is the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4.</p>&#13;
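<p>Worth noting: bash can also iterate directly over the array's elements, no seq or index arithmetic required. A sketch with placeholder paths:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

# "${FILES[@]}" expands to one word per element, so paths with spaces survive
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```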
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// run from the magento root: php encrypt.php &lt;plaintext&gt;&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so for now you have to run it from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on linux, right? You go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
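<p>Equivalently (again assuming your remote is called origin), you can add the alias from the terminal and let git edit ~/.gitconfig for you:</p>

```shell
# adds the sup alias to the [alias] section of ~/.gitconfig
git config --global alias.sup '!git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`'

# verify it was stored
git config --global --get alias.sup
```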
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables set in one stage cannot be passed along the pipeline, as each new subprocess starts with a brand new environment.</p>
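<p>A quick sketch of the gotcha, plus one of the workarounds (feeding the loop from a redirect so it stays in the current shell):</p>

```shell
count=0
printf 'a\nb\n' | while read -r line; do count=$((count + 1)); done
# the loop ran in a subshell, so its increments never reach us
echo "after pipeline: count=$count"     # prints: after pipeline: count=0

count=0
while read -r line; do count=$((count + 1)); done <<EOF
a
b
EOF
echo "with a redirect: count=$count"    # prints: with a redirect: count=2
```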
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>&#13;
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you&#39;re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit&#39;s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a &quot;fast-forward.&quot;</p></blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
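<p>Here's a throwaway demo of the behaviour in a temp-directory repo (names and messages are just placeholders): once the branches have diverged, merge.ff only makes git merge refuse rather than quietly create a merge commit.</p>

```shell
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev
git config merge.ff only

echo base > file && git add file && git commit -qm 'base'
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version

git checkout -qb feature
echo feature >> file && git commit -qam 'feature work'

git checkout -q "$main"
echo other > other && git add other && git commit -qm 'diverging work'

# histories have diverged: a fast-forward is impossible, so the merge is refused
git merge feature || echo 'merge refused, rebase first'
```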
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a descriptive failure message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a>, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD.</p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: statist TDD and mockist/London school TDD. The former is mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature testing tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that belong to a logical group and you want to select the lowest-priced product from that group; however, some products actually have a 0.00 price (for whatever reason). You don&#39;t want to show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.price, 0)) AS min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null if tableref.price is equal to 0. Returning null removes that value from consideration by MIN, which ignores nulls, effectively restricting the result to prices greater than zero. (One caveat: if every price in a group is zero, MIN returns null for that group.)</p>
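<p>To see the trick end to end, here is a small self-contained sketch using Python&#39;s built-in sqlite3 module (SQLite supports NULLIF too); the products table, its columns and the sample prices are hypothetical stand-ins for the MySQL table above:</p>

```python
import sqlite3

# In-memory database with hypothetical grouped product prices,
# including some 0.00 rows that should be excluded from MIN().
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (group_id INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [(1, 0.0), (1, 9.99), (1, 4.5), (2, 0.0), (2, 12.0)],
)

# NULLIF(price, 0) maps zero prices to NULL; MIN() ignores NULLs,
# so each group's minimum is taken over the non-zero prices only.
rows = conn.execute(
    "SELECT group_id, MIN(NULLIF(price, 0)) AS min_price "
    "FROM products GROUP BY group_id ORDER BY group_id"
).fetchall()

print(rows)  # [(1, 4.5), (2, 12.0)]
```

<p>The same pattern drops straight into the MySQL query above; only the connection boilerplate differs.</p>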
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. To HFS, PHP and php are by default the same file.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. php_select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn on the channel auto-discovery option, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for Magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite finished running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants  to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file path for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);  // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is in PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away, and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful, even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it is set to &#39;&#39; (an empty string) rather than 0. As the unit-price element expects a decimal value, the empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
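<p>Setting up the override is just a matter of mirroring the core directory structure under app/code/local and copying the class across. A self-contained sketch (the scratch directory and stub file below stand in for a real Magento root, so the commands run anywhere; on a live install you would run only the final mkdir and cp from your document root):</p>

```shell
# Scratch dir standing in for a Magento root (illustration only)
cd "$(mktemp -d)"
CORE_DIR=app/code/core/Mage/GoogleCheckout/Model/Api/Xml
LOCAL_DIR=app/code/local/Mage/GoogleCheckout/Model/Api/Xml

# On a real install the core class already exists; create a stub here
mkdir -p "$CORE_DIR"
printf '<?php // Checkout.php stub\n' > "$CORE_DIR/Checkout.php"

# Mirror the path under the local codepool and copy the class across
mkdir -p "$LOCAL_DIR"
cp "$CORE_DIR/Checkout.php" "$LOCAL_DIR/Checkout.php"
ls "$LOCAL_DIR"
```

<p>Apply the one-line change to the copy under app/code/local; the classloader will prefer it to the core version.</p>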
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to civilise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
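<p>To see the difference in miniature, here is a toy worker written in plain sh (not nginx) that traps QUIT, finishes what it was doing and exits cleanly, where a plain TERM would have killed it on the spot:</p>

```shell
# A stand-in "worker": it traps QUIT, drains, then exits cleanly.
# (sleep plays the role of an in-flight request; illustrative only)
sh -c 'trap "echo draining connections; kill \$! 2>/dev/null; exit 0" QUIT; sleep 30 & wait' &
pid=$!
sleep 1

kill -QUIT "$pid"    # analogous to `nginx -s quit`: ask politely
wait "$pid"
echo "worker exited with status $?"
```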
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made them fashionable recently, but Smalltalk sported block closures over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite the team formalising most of the vocabulary of OO software development while building Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I am slightly obsessive in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013: we (still) don&#39;t have flying cars or hoverboards, and as developers we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits which improve my command line efficiency.</p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column utility columnates input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
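<p>For example, piping a couple of colon-delimited records through it (assuming the usual util-linux/BSD column binary) lines the fields up neatly:</p>

```shell
printf 'alice:x:1000\nbob:x:1001\n' | column -s':' -t
```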
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work in a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack; entering an index number then switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
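<p>These conveniences come from a couple of zsh options plus a small alias. A sketch of what to put in ~/.zshrc, with option names per the zsh documentation (the d alias is a common convention, e.g. oh-my-zsh&#39;s, which also adds the bare-number aliases for jumping by stack index):</p>

```shell
# ~/.zshrc fragment (option names per `man zshoptions`)
setopt AUTO_CD      # a bare directory name is treated as `cd <name>`
setopt AUTO_PUSHD   # every cd pushes the old directory onto the stack
alias d='dirs -v'   # list the directory stack with indexes
```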
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available on GitHub, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have MacPorts in /opt/local, the default, and are using the mysql55 port).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
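<p>If the project uses Bundler, the same flags can be persisted so that a plain bundle install picks them up too. This relies on Bundler&#39;s documented bundle config build.&lt;gem&gt; setting; the paths are the MacPorts defaults from above:</p>

```shell
# Persist the build flags for the mysql2 gem (written to Bundler's config)
bundle config build.mysql2 \
  --with-mysql-lib=/opt/local/lib/mysql55/mysql \
  --with-mysql-include=/opt/local/include/mysql55/mysql
```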
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, if we want to debug during a phpunit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings early enough for xdebug to hook in.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to port 9000 on its localhost back to port 9000 on the ssh client - in this case mydevmachine.local. So when xdebug on the VM connects to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad at all. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable release of package y, while package z requires the beta release of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so we use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to hand it one without creating temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
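<p>In fact, xmllint can also read standard input when you pass - as the filename, so a plain pipe works as an alternative:</p>

```shell
# '-' tells xmllint to read stdin; substitute `magerun config:dump` for the echo
echo '<config><modules><Mage_Core/></modules></config>' | xmllint --format -
```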
<p>So, magerun and xmllint: a simple way to get a formatted, easy-to-examine view of how Magento puts your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
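<p>The same flag works with git log too, which annotates each commit in the output with the files it touched. A self-contained demo in a throwaway repo (the file names and commit messages are, of course, made up):</p>

```shell
# Demo in a scratch repo: two commits, then ask git log which files
# each commit touched
cd "$(mktemp -d)" && git init -q
echo one > a.txt && git add a.txt
git -c user.name=demo -c user.email=demo@example.invalid commit -qm 'add a.txt'
echo two > b.txt && git add b.txt
git -c user.name=demo -c user.email=demo@example.invalid commit -qm 'add b.txt'

git log --name-only -2   # each commit is followed by its file list
```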
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important; by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there than on pretty much any other platform, because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general-purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation, admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying; if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer&#39;s type to the string &#39;disabled&#39;, the existing observer is removed and replaced with something that will never fire.</p>
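<p>For illustration, a sketch of the shape such a local.xml override takes. The event and observer key names below are from memory, so verify them against app/code/core/Mage/Log/etc/config.xml for your release before relying on this:</p>

```xml
<config>
    <frontend>
        <events>
            <!-- visitor logging fires on every request via these two events -->
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
            <!-- customer session logging -->
            <customer_login>
                <observers><log><type>disabled</type></log></observers>
            </customer_login>
            <customer_logout>
                <observers><log><type>disabled</type></log></observers>
            </customer_logout>
        </events>
    </frontend>
</config>
```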
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (and programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine; it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef Gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
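<p>As a sketch of what such an adminhtml.xml entry can look like (the menu path and the module name below are hypothetical placeholders; substitute the node names of the menu entry you actually want to hide):</p>

```xml
<?xml version="1.0"?>
<config>
    <menu>
        <!-- hypothetical path: hides a child entry of the Catalog menu;
             use the real node names from the core adminhtml.xml -->
        <catalog>
            <children>
                <search>
                    <depends>
                        <module>Mage_DoesNotExist</module>
                    </depends>
                </search>
            </children>
        </catalog>
    </menu>
</config>
```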
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much as ; does in regular bash; the backslash stops the shell interpreting it itself).</p>
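<p>Putting the touch and find steps together, with made-up dates in a scratch directory:</p>

```shell
# Scratch directory with files "created" at known times via touch -t
cd "$(mktemp -d)"
touch -t 202201010000 start_date_file   # range start: 1 Jan 2022
touch -t 202203010000 end_date_file     # range end:   1 Mar 2022
touch -t 202202010000 in-range.log      # inside the range
touch -t 202112010000 too-old.log       # before the range

# Only in-range.log is newer than start_date_file and not newer than
# end_date_file (-name keeps the boundary files themselves out of the way)
find . -type f -name '*.log' -newer start_date_file ! -newer end_date_file
# prints ./in-range.log
```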
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning. First: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change without complaint.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it to give you raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
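<p>To try the text-munging half in isolation, substitute printf for the mysql step (three made-up rows):</p>

```shell
# printf stands in for the mysql step: one column value per line
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# prints ['red','green','blue'];
```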
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name:[Start TO Finish]
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in this date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for and against. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think, though, that in Object Oriented software you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While that means it has to be constructed anew on each iteration, it allows PHP to free the memory it is using.</p>
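<p>A toy model makes the difference clear. The sketch below is Python, not Magento&#39;s actual registry, but it mimics the two lookup styles: the singleton is pinned by a global registry and accumulates state across every iteration of the indexing loop, while a fresh model instance becomes unreachable (and collectable) as soon as each iteration ends:</p>

```python
# Toy model of Mage::getSingleton() vs Mage::getModel(); this is not
# Magento's actual implementation, just the shape of the problem.
class Validator:
    def __init__(self):
        self.rule_cache = []  # stands in for per-rule data built up during validation

    def process(self, item):
        self.rule_cache.append(item)

_registry = {}

def get_singleton(cls):
    # One shared instance, referenced by the registry for the whole run.
    if cls not in _registry:
        _registry[cls] = cls()
    return _registry[cls]

def get_model(cls):
    # A fresh instance on every call.
    return cls()

# Singleton: the registry's reference keeps the cache alive and growing.
for item in range(1000):
    get_singleton(Validator).process(item)
assert len(get_singleton(Validator).rule_cache) == 1000

# Fresh model: each instance is dropped after its iteration, so its
# memory can be reclaimed; only the last one is still reachable here.
validator = None
for item in range(1000):
    validator = get_model(Validator)
    validator.process(item)
assert len(validator.rule_cache) == 1
```

<p>With the singleton, whatever the validator caches per rule survives the whole run; with per-iteration instances, the runtime can reclaim it each time around.</p>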
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t play nicely with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up and moved on to Ubuntu. It was remarkable in that it provided a BSD-like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean, in the absolute best case, merely becoming un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>Lastly, we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf over to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware, you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products to determine whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid-looking results. Funnily enough, though, the status column came first and the product id column second (unlike the if branch, where the product id column comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array resultset where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows - one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
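<p>fetchPairs() behaves much like building a Python dict from two-column rows, which is the easiest way to see why the column order matters (the ids and statuses below are made up):</p>

```python
# fetchPairs() maps the first column to the second, like dict() over
# (key, value) rows in Python. Sample ids and statuses are made up.
product_ids = [101, 102, 103, 104]
statuses = [1, 1, 2, 1]  # one status per product

# Status column first: each repeated status overwrites the previous entry,
# collapsing the result to one row per unique status value.
status_first = dict(zip(statuses, product_ids))
assert len(status_first) == 2

# Entity id (product id) first: one entry per product, as intended.
entity_first = dict(zip(product_ids, statuses))
assert entity_first == {101: 1, 102: 1, 103: 2, 104: 1}
```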
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD) drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking, .bash_profile gets sourced only on login (Mac OS X&#39;s Terminal.app is an exception: it starts each session as a login shell). Specifically this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell in other ways, such as with the su - command, or via an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only sourced if your .bash_profile does so itself.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
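<p>The same idea is easy to picture outside PHPUnit. This hand-rolled Python stub (not PHPUnit&#39;s API; the method names are purely illustrative) makes every method call return the object itself, which is all a fluent interface needs:</p>

```python
class FluentStub:
    """Any attribute lookup yields a method that returns the stub itself."""
    def __getattr__(self, name):
        def method(*args, **kwargs):
            return self
        return method

mail = FluentStub()
# Chained calls all resolve to the same object, so an arbitrary fluent
# call sequence can be exercised without the real collaborator.
result = mail.setSubject("hello").setBodyText("test").send()
assert result is mail
```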
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. To me that doesn&#39;t seem likely to encourage innovation; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em>, the configuration files of uninstalled packages are not deleted.</p>
<p>You can do this manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
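<p>A sketch of the sed substitution at the heart of the one-liner, tried in isolation (foo and bar are hypothetical package names):</p>

```shell
# Simulated dpkg --get-selections output: one package marked
# deinstall (removed, config files kept), one still installed
printf 'foo\tdeinstall\nbar\tinstall\n' \
    | grep deinstall \
    | sed 's/deinstall$/purge/'
```

<p>The foo line comes out with its status rewritten to purge, which is exactly the form dpkg --set-selections expects.</p>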
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>The modifyvm subcommand must only be used when the VM is powered off; for a running VM, use controlvm instead.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there; specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 (32-bit) shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in the assembly generated for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
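<p>A minimal sketch of that comparison, assuming a gcc with 32-bit code generation support; the sample source includes no headers, so the i386 dev packages are not needed just to emit assembly:</p>

```shell
# Hypothetical one-function source file
cat > test.c <<'EOF'
int add(int a, int b) { return a + b; }
EOF

# Same code, two targets, Intel syntax
gcc -m64 -S -masm=intel -o test64.s test.c
gcc -m32 -S -masm=intel -o test32.s test.c

# Registers and calling conventions differ between the two
diff test64.s test32.s || true
```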
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6-compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of <em>all</em> Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entityId, and then returns the entry matching $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice by restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric, I think, is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about an image: its size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to resize a bunch of images from 1920x1080 down to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
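<p>The filename rewrite inside that loop can be tried on its own; note the escaped dot, so the pattern only matches a literal dot before jpg:</p>

```shell
# Rewrite photo.jpg into photo-resized.jpg
echo photo.jpg | sed 's/\.jpg$/-resized.jpg/'
# → photo-resized.jpg
```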
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang (!) operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of mybranch to a branch adiffnamefortheremotebranch at remote origin&#39;. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch called someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how sponge waits until it reaches end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
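<p>To see why a plain redirection cannot do an in-place edit, here is a minimal demonstration of the failure sponge exists to avoid:</p>

```shell
echo "hello" > f.txt

# The shell opens (and truncates) f.txt for writing *before*
# tr gets a chance to read from it, so the data is lost
tr a-z A-Z < f.txt > f.txt

wc -c < f.txt
# → 0
```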
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 (cp1252) to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
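<p>A quick sketch of that asymmetry: bytes 0x93/0x94 (octal \223/\224) are curly quotes in cp1252 but unassigned control codes in iso-8859-1, so decoding them as cp1252 is the safe guess:</p>

```shell
# Fabricate a small cp1252 sample file
printf '\223hello\224\n' > sample.txt
iconv -f cp1252 -t utf-8 sample.txt
# → “hello”
```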
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
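<p>For example, assuming a GNU file that supports the --mime-encoding option:</p>

```shell
# Two small samples: pure ASCII vs UTF-8 with a multibyte character
printf 'plain ascii\n' > a.txt
printf '\303\251 is e-acute\n' > b.txt   # \303\251 is é in UTF-8

file -b --mime-encoding a.txt b.txt
# → us-ascii
# → utf-8
```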
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
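<p>You can exercise the same pattern locally without a network; in this sketch cat stands in for wget, and a throwaway archive is packed first (this relies on GNU tar defaulting to stdin when no -f is given, as Debian-flavoured builds do):</p>

```shell
# Pack a throwaway archive to play with
mkdir -p atarfile
echo "some contents" > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile

# The same redirection trick as above, minus the network
tar zxv < <(cat atarfile.tar.gz)
cat atarfile/file.txt
# → some contents
```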
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
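<p>The command substitution trick generalises, and you can sanity-check the pattern with plain shell built-ins before pointing it at a live site. In this sketch printf stands in for drush pm-list and echo for drush pm-disable (the module names are just examples):</p>

```shell
# Simulate `drush pm-list --no-core --type=module --pipe` output
# with printf (hypothetical module names).
modules=$(printf 'ad\nad_channel\nclick_filter')

# Word-splitting turns each line into a separate argument, exactly as it
# does inside backticks; echo stands in for `drush pm-disable`.
set -- $modules
echo "would disable $# modules: $*"
# prints: would disable 3 modules: ad ad_channel click_filter
```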
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
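<p>On newer versions of Git (1.7.0 onwards, if memory serves) the last two steps collapse into one: git push -u both pushes the branch and records its upstream. A throwaway-repo sketch, with a scratch bare repo standing in for your real origin:</p>

```shell
# Scratch remote + working repo, standing in for your real origin.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work" && cd "$tmp/work"
git config user.email 'you@example.com' && git config user.name 'You'
git remote add origin "$tmp/origin.git"

git checkout -q -b my-new-feature
echo change > file.txt && git add file.txt
git commit -qm 'Initial feature commit'

# -u pushes the branch AND sets origin/my-new-feature as its upstream.
git push -q -u origin my-new-feature
git rev-parse --abbrev-ref --symbolic-full-name '@{u}'
# prints: origin/my-new-feature
```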
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
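<p>To confirm the membership took, list the user&#39;s groups afterwards (note the new group only shows up in freshly started login sessions):</p>

```shell
# Print every group name the account belongs to; after the usermod above,
# 'wheel' should appear in this list. The current user is used here -
# substitute the account you actually modified.
id -nG "$(whoami)"
```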
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present:</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">PHP XSL extension</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
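<p>If you don&#39;t need to keep the stash around afterwards, git stash pop does the apply and drops the entry in one step (plain apply leaves it in the stash list). A self-contained sketch in a scratch repo:</p>

```shell
# Scratch repo with a develop branch alongside the default branch.
cd "$(mktemp -d)" && git init -q .
git config user.email 'you@example.com' && git config user.name 'You'
echo one > notes.txt && git add notes.txt && git commit -qm 'Initial commit'
git branch develop

echo two >> notes.txt   # oops - edited on the wrong branch
git stash -q            # park the change
git checkout -q develop
git stash pop -q        # re-apply it AND drop the stash entry
git commit -qam 'Apply stashed changes'
git stash list          # prints nothing - pop already dropped it
```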
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
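<p>If you prefer, git can write that file for you: git config --global saves to $HOME/.gitconfig, so running the same commands as the jenkins user has an identical effect. A sketch, simulated here with a scratch HOME standing in for /var/lib/jenkins:</p>

```shell
# Point HOME at a scratch dir standing in for /var/lib/jenkins,
# then let git write the [user] section itself.
export HOME=$(mktemp -d)
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
cat "$HOME/.gitconfig"
```

<p>On a real box you would run the two git config commands as the jenkins user.</p>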
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases; we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to view them through Jenkins.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give our project a name. Choose whatever you like, but for the purposes of this tutorial I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reload server configuration</li>
<li>restart - restart the server</li>
<li>exit - close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>
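<p>When this bites you during development, the fastest escape hatch (assuming the default file cache backend, run from the Magento root) is simply to flush the cache directory; a sketch, simulated here against a scratch tree:</p>

```shell
# Simulate a Magento root with some cached table metadata...
cd "$(mktemp -d)"
mkdir -p var/cache/mage--0
touch var/cache/mage--0/stale_column_list

# ...then flush the file cache so the column list is re-read on the
# next request.
rm -rf var/cache/*
ls -A var/cache   # prints nothing
```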

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
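<p>For feed readers that don&#39;t render the embedded gist below, the query was along these lines (reconstructed from INFORMATION_SCHEMA.TABLES, where DATA_LENGTH and INDEX_LENGTH are reported in bytes; the exact columns in the gist may differ):</p>

```shell
# Largest tables first, sizes in MB; feed this to the mysql client.
QUERY="
SELECT table_schema, table_name, table_rows,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM   information_schema.tables
ORDER  BY (data_length + index_length) DESC
LIMIT  20;"
printf '%s\n' "$QUERY"
```

<p>Run it with, for example, mysql -u root -p -e "$QUERY".</p>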
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an interrupted sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be done either with the graphical package manager or with aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it, inherited from Mage_Payment_Model_Method_Abstract </p>
<pre><code>$this-&gt;_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use the public wrapper on your payment method instance</p>
<pre><code>$paymentMethod-&gt;debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream </p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least work.</p>
<p>You can also avoid the problem entirely for future branches by setting:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
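<p>For option 1, you don&#39;t need to hand-edit .git/config; the same entries can be written with git config. A sketch using a throwaway repository (the remote URL is just the example from above):</p>

```shell
# Scratch repository so the commands have something to act on
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git remote add origin git@github.com:ajbonner/foo.git

# Write the [branch "master"] section git's error message asks for
git config branch.master.remote origin
git config branch.master.merge refs/heads/master

git config --get branch.master.remote   # origin
```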
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, lightweight approach. EcomDev, meanwhile, on the surface appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.2.0 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
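<p>The gist below has the full version I put on github; a minimal sketch of the same idea (the function name is mine) looks like this:</p>

```shell
# Dump only the tables in a database whose names match a LIKE pattern.
# mysql lists the matching table names; mysqldump then receives them all.
mysqldump_bypattern() {
  local user="$1" db="$2" pattern="$3"
  local tables
  tables=$(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '$pattern'" "$db")
  # word splitting of $tables is intended: one argument per table name
  mysqldump -u"$user" -p "$db" $tables
}
# usage: mysqldump_bypattern user mydb 'mytables_%' > mytables.sql
```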
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them more respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this looks like yet another variable construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable; and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby String. In Ruby (and most OO languages) two Strings are different objects even if they consist of the same sequence of characters. In Ruby, two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
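<p>Strictly speaking, shell aliases don&#39;t take arguments; the &quot;$@&quot; above is empty at expansion time, and the alias only works because the file name you type lands after the expanded text (redirections may appear anywhere on a command line). A function form is arguably cleaner:</p>

```shell
# Same behaviour as the alias, but arguments are passed to qlmanage explicitly
ql() { qlmanage -p "$@" > /dev/null 2>&1; }
```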
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses challenges, particularly with large InnoDB-based applications like Magento, where databases can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character for single-byte output or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place and this will not affect the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the integrity of a table can be lost as writes occur to it during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance: gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
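<p>Putting the options above together, here is a sketch of a backup wrapper (the function name and file paths are mine, not a standard tool):</p>

```shell
# Consistent, non-blocking dump piped straight to gzip. With
# --single-transaction and --skip-lock-tables the locking concerns
# around inline compression discussed above are largely mitigated.
backup_db() {
  local user="$1" db="$2" outfile="$3"
  mysqldump -u"$user" -p \
    --single-transaction --skip-lock-tables \
    --disable-keys --no-autocommit \
    "$db" | gzip -c > "$outfile"
}
# usage: backup_db user mydatabase mydump.sql.gz
```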
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try the Vim substitute command <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is greedy, matching all the way to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is the canonical naming for a bare git repository (i.e. one that holds only the meta information and no working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy from a remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the cache directory PEAR is expecting:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name; in Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
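<p>For completeness, a minimal sketch of where each value actually lives in Ruby: the script's own name sits in the global <code>$0</code> (also available as <code>$PROGRAM_NAME</code>), while ARGV holds only the arguments. The filename here is hypothetical, just for illustration:</p>&#13;

```ruby
#!/usr/bin/env ruby
# file args.rb

puts $0              # the script's own name, like argv[0] in C
puts ARGV.length     # the number of arguments passed in
puts ARGV[0].inspect # the first argument, or nil if none were given
```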
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<pre>aaron ~/Development/ruby/testapp $ rake db:create&#13;
(in /Users/aaron/Development/ruby/testapp)&#13;
Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</pre>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<pre>aaron ~/Development/ruby/testapp $ rake db:create&#13;
(in /Users/aaron/Development/ruby/testapp)</pre> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often in the course of the index's lifecycle want to update documents. This can prove tricky with the current implementation as there is no in situ update feature; you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around in the source a little to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP it seems documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table format, suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema; we want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any language packs you want to install also need converting to UTF-8 first:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it could. As a developer, or even a recent graduate, you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but keep your CV conservative and well spaced, using a minimum of fonts. Giant mastheads, fancy bullets, a mess of fonts - none of it is impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life, and not least this blog itself. If you listened to every top ten list of what not to include in your CV, you'd quickly find there's absolutely nothing you should put in your CV. Critically consider any suggestion you read about what a good CV looks like, and make your own mind up based upon the supporting arguments and the feedback your own CV gets. For example, if you disagree with point two and decide to include an 'interests section', ask recruiters when they call you what they think about it: did it provide value or was it noise? If you're getting interviews, ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables declared with a capitalised first letter. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
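<p>A minimal sketch of just how little Ruby resists (the constant name here is my own invention, not from any real codebase):</p>

```ruby
# A Ruby 'constant' is any identifier starting with a capital letter.
MAX_RETRIES = 3

# Reassignment is perfectly legal: Ruby emits
# "warning: already initialized constant" to stderr and carries on.
MAX_RETRIES = 5

puts MAX_RETRIES # => 5
```

<p>Note the program does not raise; the only protest is a warning you are free to ignore.</p>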
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And, of course, a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you could write a simple TCP client in PHP, strip all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is rare among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes in memory (room for 19 characters plus the terminating NUL), and str refers to the address where they start. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, e.g. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p>&#13;
<p>The quickest way, is actually very simple, and that is to simply use curl to resubmit the callback. This is assuming you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first  place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one with the request data Worldpay sent to your callback URL, including the encoded POST data, and one with the response from your server.</p>&#13;
<p>Now assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today, have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>It is frankly amazing that the next two ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive file system (for example case-sensitive HFS+ on a Mac, or most Unix file systems) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similarly to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (relative to the current date).</p>&#13;
<p><code>SELECT * FROM my_table WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old, and the WHERE clause matches it.</p>&#13;
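<p>If you want to sanity-check the interval arithmetic outside MySQL, GNU date (an assumption: this is the GNU coreutils version; BSD date uses different flags) performs the same subtraction:</p>

```shell
# 30 days before 2010-05-20, the same value DATE_SUB(CURDATE(), INTERVAL 30 DAY)
# would produce if CURDATE() returned 2010-05-20
date -d '2010-05-20 30 days ago' +%F    # 2010-04-20
```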
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider  three aspects</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is simplest, create some html, stuff it into a  .phtml file and copy it to a directory within your theme, which  resides (relative to the store root dir) in  'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates  (app/design/frontend/default/default/layout') to refer to your new template  block. For most general purpose, globally available blocks, this will be  'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put <code>&lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt;</code> and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
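<p>To reproduce the check by hand, you can send an old-style two-argument delete with netcat and watch for the CLIENT_ERROR response. This is only a sketch: <code>somekey</code> is a made-up key name, and it assumes an affected memcached 1.4.0-1.4.3 instance listening on the default localhost:11211.</p>

```shell
# Old-style delete with an expiry argument; affected 1.4.x builds reject it
printf 'delete somekey 0\r\nquit\r\n' | nc localhost 11211
```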
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour in memcached 1.4, which removed the second parameter of the delete command. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. Version 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running <code>locale -a</code>), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
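<p>A minimal session sketch (this assumes the de_DE.UTF-8 locale is actually installed; substitute whatever <code>locale -a</code> reports on your machine):</p>

```shell
# Show which UTF-8 locales are installed
locale -a | grep -i 'utf' || true

# Switch the current shell session over to a UTF-8 locale
export LC_ALL='de_DE.UTF-8'
echo "$LC_ALL"
```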
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client where they were pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ: a cp-1252 trademark symbol has a different byte sequence in utf-8. Try to render a cp-1252 copyright sign as utf-8 and you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
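<p>The same conversion can be sanity-checked from the command line with the iconv utility: the copyright sign is the single byte 0xA9 (octal 251) in ISO-8859-1, but a two-byte sequence in UTF-8.</p>

```shell
# Emit the ISO-8859-1 copyright byte and convert it to UTF-8
printf '\251' | iconv -f ISO-8859-1 -t UTF-8

# Byte counts before and after conversion: 1 byte in, 2 bytes out
printf '\251' | wc -c
printf '\251' | iconv -f ISO-8859-1 -t UTF-8 | wc -c
```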
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default, both of these functions expect iso-8859-1 input. To correctly prepare your utf-8 text for output you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't work with the BSD sed shipped with Mac OS X, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to pass an empty backup suffix: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code></p>&#13;
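<p>A quick way to verify the behaviour on whichever sed you have: a non-empty suffix such as <code>-i.bak</code> works on both GNU and BSD sed, at the cost of leaving a backup file behind.</p>

```shell
# Create a throwaway file to edit
printf 'hello world\n' > /tmp/helloworld.txt

# In-place substitution with a backup suffix (portable across GNU and BSD sed)
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt

cat /tmp/helloworld.txt        # goodbye world
```

<p>If you don't want the .bak file, use the empty-suffix form on BSD sed, or plain <code>-i</code> on GNU sed.</p>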
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br /> http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In OSX, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo "${FILES[$ELEMENT]}"&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) part generates the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and seq x y produces the sequence of numbers from x to y. If you call seq 0 4, you will get 0, 1, 2, 3 and 4, each on its own line.</p>&#13;
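<p>As an aside, bash can also iterate the array values directly, which skips the index arithmetic entirely (a sketch with made-up paths; quoting each expansion keeps paths containing spaces intact):</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

# "${FILES[@]}" expands to one word per element, so no seq is needed
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```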
<p>So while the syntax is a little smelly, the terse power of it is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base urls, test payment accounts, shipping account details and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Usage: php encrypt.php &lt;plaintext&gt;, run from the Magento root directory&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains, so the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
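<p>You can watch the moving part on its own: git symbolic-ref --short HEAD simply prints the current branch name, which the alias splices in after origin/. A throwaway repo shows it (the branch name here is just an example, and git is assumed to be on your PATH):</p>

```shell
# Fresh scratch repository on an example branch
cd "$(mktemp -d)"
git init -q
git checkout -q -b zendesk

# This is the value the alias appends to origin/
git symbolic-ref --short HEAD    # -> zendesk
```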
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables set in one stage cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
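<p>A quick sketch of the gotcha (the file path is just an example): a counter bumped inside a piped-to loop vanishes with its subshell, while redirecting into the loop keeps everything in the current shell:</p>

```shell
printf 'a\nb\nc\n' > /tmp/lines.txt

count=0
cat /tmp/lines.txt | while read -r line; do
  count=$((count + 1))          # runs in a subshell
done
echo "after pipeline: $count"   # still 0

count=0
while read -r line; do
  count=$((count + 1))          # runs in the current shell
done < /tmp/lines.txt
echo "after redirect: $count"   # 3
```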
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simply moves the pointer forward; this is called a fast-forward.</p></blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
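<p>Here is the setting at work in a scratch repository (git is assumed to be on your PATH; committer identity is passed inline so no global config is touched):</p>

```shell
cd "$(mktemp -d)"
git init -q
git checkout -q -b main
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m 'base'

# Branch off and commit, so feature is a direct descendant of main
git checkout -q -b feature
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m 'work'

git checkout -q main
git config merge.ff only

# A clean fast-forward: HEAD simply moves up to feature,
# and no merge commit appears in the history
git merge -q feature
git log --oneline
```

<p>Had main gained commits of its own in the meantime, the same merge would be refused, nudging you to rebase first.</p>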
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing a description message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: the Statist style and the Mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from consideration by MIN, which has the effect of only comparing column values greater than zero.</p>
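<p>If you want to try the pattern without a MySQL instance handy, the sqlite3 command line shell (assuming you have it installed) treats MIN and NULLIF the same way; the table and prices below are made up for illustration:</p>

```shell
sqlite3 <<'SQL'
CREATE TABLE products (group_id INTEGER, price REAL);
INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 7.25);
-- NULLIF maps each 0.0 price to NULL, which MIN() then skips
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products GROUP BY group_id;
SQL
```

<p>For these rows it prints 1|4.5 and 2|7.25: the zero price in group 1 is ignored rather than reported as the minimum.</p>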
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired.</p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
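<p>If you&#39;re unsure which behaviour a given directory has, you can probe it by creating a file and then looking it up under a case-swapped name. A minimal sketch (in Python; the helper name is my own invention):</p>

```python
import os
import tempfile

def is_case_insensitive(path):
    """Return True if the filesystem at `path` folds case (like default HFS)."""
    with tempfile.NamedTemporaryFile(prefix="CaseProbe", dir=path) as tmp:
        directory, name = os.path.split(tmp.name)
        # On a case-insensitive filesystem the case-swapped name resolves
        # to the same file; on ext4 or case-sensitive HFS it does not.
        return os.path.exists(os.path.join(directory, name.swapcase()))

print(is_case_insensitive(tempfile.gettempdir()))
```

<p>Run against a default HFS volume this should print True; against a typical Linux ext4 mount, False.</p>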
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55.</p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research, it turns out that&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;export PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or ~/.zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command tests for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working:</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for Magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cacheable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember when. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by pointing PHP at the sock file for the MySQL version you&#39;re using. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a URL that I knew would be the name of an image. I knew strstr well, but that operates by giving you the remainder of a string from some needle onwards in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
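<p>To illustrate what such a wrapper buys you, here&#39;s a sketch (in Python rather than PHP, with method names I&#39;ve made up) of a string object whose names say what they do, instead of encoding C heritage:</p>

```python
class Str:
    """Toy string wrapper with descriptive names for strstr/strrchr behaviour."""

    def __init__(self, value):
        self.value = value

    def from_first(self, needle):
        """Like strstr(): the haystack from the first occurrence of needle."""
        i = self.value.find(needle)
        return self.value[i:] if i != -1 else ""

    def from_last(self, needle):
        """Like strrchr(): the haystack from the last occurrence of needle."""
        i = self.value.rfind(needle)
        return self.value[i:] if i != -1 else ""

url = Str("http://www.google.com/a/b/c/d.img")
print(url.from_last("/"))   # /d.img
print(url.from_first("/"))  # //www.google.com/a/b/c/d.img
```

<p>With names like these you&#39;d never have to guess which function takes the first occurrence and which takes the last.</p>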
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful, even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order, with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did; however, Hiltzik couches everything in terms of the scientists, and it is his ability to bring these characters to life that makes the book so riveting to read.</p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly - their feelings, motivations and backgrounds - that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in the order local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
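<p>The resolution order described in [2] can be sketched like this (Python for brevity; the pool contents are invented for illustration):</p>

```python
# Magento searches codepools in priority order: local, community, then core.
POOLS = ("local", "community", "core")

# Invented example: the same class defined in two pools.
CODEBASE = {
    "core": {"Mage_Core_Model_Foo": "core implementation"},
    "local": {"Mage_Core_Model_Foo": "local override"},
}

def resolve(class_name):
    """Return the highest-priority pool that defines class_name."""
    for pool in POOLS:
        if class_name in CODEBASE.get(pool, {}):
            return pool
    raise LookupError(class_name)

print(resolve("Mage_Core_Model_Foo"))  # local
```

<p>Because &#39;local&#39; is searched first, the override wins without touching the core file, which is exactly why the fix above lives in app/code/local.</p>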
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
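<p>The -s flag is just a convenience wrapper around plain Unix signals: SIGQUIT asks the master process to shut down gracefully, SIGTERM to stop immediately. A rough equivalent using kill directly (the pid file path below is the Ubuntu default and is an assumption that may differ on your build):

```shell
# send SIGQUIT (graceful shutdown) straight to the nginx master process;
# /var/run/nginx.pid is the default pid file location on Ubuntu -- adjust
# for your build if nginx was compiled with a different --pid-path
sudo kill -QUIT "$(cat /var/run/nginx.pid)"

# wait for the workers to drain their outstanding connections and exit
while pgrep -x nginx > /dev/null; do
    sleep 1
done
```

This is handy in deploy scripts that must not proceed until the old workers are really gone.</p>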
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason about why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat, but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. The team formalised most of the vocabulary of OO software development while creating Smalltalk, yet it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero: at the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. Most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013; we (still) don&#39;t have flying cars or hoverboards, AND, as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits which improve my command line efficiency.</p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
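<p>For a quick illustration without touching /etc/passwd (the input lines here are made up):

```shell
# columnate colon-delimited input into an aligned table;
# -s':' sets the input delimiter, -t turns the fields into columns
printf 'USER:SHELL\nroot:/bin/bash\nwww-data:/usr/sbin/nologin\n' | column -s':' -t
```

Each field is padded so the columns line up, which makes long delimited files far easier to scan.</p>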
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack; entering an index number then switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
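<p>If your zsh doesn&#39;t behave this way out of the box, these behaviours come from a couple of zsh options, while the &#39;d&#39; and bare-number commands are aliases in setups like oh-my-zsh. A minimal sketch for a .zshrc (an illustration, not my exact config):

```shell
# ~/.zshrc
setopt AUTO_CD     # typing a directory name on its own cd's into it
setopt AUTO_PUSHD  # every cd pushes the old directory onto the stack

alias d='dirs -v'  # list the directory stack with indices
# jump to stack entry N by typing just the number, oh-my-zsh style
for n in {1..9}; do alias "$n"="cd -$n"; done
```

With AUTO_PUSHD on, the stack builds itself as you move around, so &#39;d&#39; always has something useful to show.</p>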
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
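<p>The trick works because vim pipes the buffer out to the given shell command, and tee, running under sudo, writes its standard input back to the file named by % (the current file). The same mechanism on the command line, against a throwaway file:

```shell
# tee copies stdin to the named file (and to stdout, silenced here);
# run under sudo, that file write happens with root privileges
echo 'some configuration line' | tee /tmp/example.conf > /dev/null
cat /tmp/example.conf   # prints: some configuration line
```

So vim never needs write permission itself; the elevated tee process does the writing.</p>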
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine; however, if we want to debug during a phpunit test, normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin, which isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
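<p>A minimal composer.json gives a flavour of how this looks in practice (the acme/my-library package name is illustrative, not a real project):

```json
{
    "name": "acme/my-library",
    "require": {
        "php": ">=5.3.3",
        "symfony/console": "2.1.*",
        "monolog/monolog": "1.2.*"
    },
    "autoload": {
        "psr-0": { "Acme": "src/" }
    }
}
```

Running composer install resolves those constraints against Packagist and generates an autoloader; no blessed central channel required.</p>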
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem; rather, its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package X requires the stable version of package Y, while package Z requires a beta of package Y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months, <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with (piped input needs an explicit - argument), so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
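<p>Process substitution is worth knowing in its own right: bash expands &lt;(command) into a /dev/fd path that the outer command can open like an ordinary file, so it works with any file-expecting tool. A small demonstration with diff:

```shell
#!/usr/bin/env bash
# compare the output of two commands without temporary files;
# each <(...) expands to a readable /dev/fd/N path
diff <(printf 'a\nb\n') <(printf 'a\nc\n')
```

The same pattern is how the xmllint one-liner feeds magerun&#39;s output in as if it were a file on disk.</p>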
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
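<p>A close cousin worth knowing is --name-status, which prefixes each file with what happened to it (A added, M modified, D deleted):

```shell
# like --name-only, but with a status letter per file
git show --name-status HEAD
git diff --name-status master..origin/master
```

The status letters make it obvious at a glance whether a fetch brought in new files or just edits to existing ones.</p>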
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL&#39;s Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
mysql&gt; FLUSH PRIVILEGES;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp is about as similar to an early-00s PHP 4 webapp as Scala is to Java. Yet many critics slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in posts like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. That post has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there, advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform, you learn through brutal experience what works and what doesn&#39;t, and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively whether what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails looks at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
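<p>The docs&#39; search order can be sketched in plain Ruby (no Rails required; the names below are mine, not ActiveSupport&#39;s): register the generic handler first, the specific handlers last, and search from the bottom up.</p>

```ruby
# A plain-Ruby sketch of the lookup rule the docs describe: handlers
# are searched bottom-to-top, so the first match walking up from the
# bottom wins. Register the generic Exception-ish handler at the TOP
# and the specific handlers below it.
handlers = []
register = ->(klass, method_name) { handlers << [klass, method_name] }

register.call(StandardError, :render_500)   # generic, listed first
register.call(ArgumentError, :render_422)   # specific, listed last

# Bottom-to-top search over the registered handlers
find_handler = ->(exc) {
  pair = handlers.reverse.find { |klass, _| exc.is_a?(klass) }
  pair && pair.last
}

find_handler.call(ArgumentError.new)   # => :render_422 — the specific handler wins
```

<p>Reverse the registration order and the StandardError handler would swallow everything, which is exactly the trap the blog author fell into.</p>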
<p>What can we learn from this? Well, one thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language (or a framework, seriously, whatever) for this phenomenon is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how the PHP community is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist in the remote (i.e. I had previously run $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
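<p>As an aside (this bit is me editorialising; check <code>man git-fetch</code> on your version), modern git can fetch and prune in a single step with <code>git fetch --prune</code>, and the <code>fetch.prune</code> config option makes every fetch do it. A throwaway-repo demo:</p>

```shell
# Demo in a scratch repo: publish a branch, delete it on the remote,
# then let `git fetch --prune` drop the stale remote-tracking ref
# (a one-step alternative to `git remote prune origin`).
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git clone -q "$tmp/remote.git" work 2>/dev/null   # empty-repo warning silenced
cd work
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD            # publish the default branch
git push -q origin HEAD:stale      # publish a second branch
git fetch -q origin                # local repo now tracks origin/stale
git push -q origin :stale          # delete it on the remote...
git fetch --prune -q origin        # ...and prune the stale tracking ref
git branch -r                      # origin/stale is gone
# To make every fetch prune automatically:
# git config --global fetch.prune true
```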
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer type to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
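<p>For anyone who doesn&#39;t want to click through to the gist, the shape of the override looks roughly like this. The event and observer node names are recalled from Mage_Log&#39;s configuration, so cross-check them against app/code/core/Mage/Log/etc/config.xml for your Magento version:</p>

```xml
<!-- app/etc/local.xml (sketch): replace each Mage_Log observer with
     type 'disabled' so the handler is never fired. Verify the
     event/observer names against Mage_Log's own config.xml. -->
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <log><type>disabled</type></log>
                </observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers>
                    <log><type>disabled</type></log>
                </observers>
            </controller_action_postdispatch>
            <!-- repeat for the other Mage_Log events, e.g.
                 customer_login, customer_logout, sales_quote_save_after -->
        </events>
    </frontend>
</config>
```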
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to XSS-safe the code: they fixed one spot but (and programmers are human) missed the other, identical line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped \; terminates the command sequence (much as ; does in regular bash; the backslash stops your shell from swallowing it).</p>
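<p>Putting the whole recipe together, with GNU find&#39;s -delete doing the removal instead of spawning an rm per file (stick with -exec rm {} \; on finds without -delete). One wrinkle: the end marker satisfies both range tests against itself, so it has to be excluded by name.</p>

```shell
# Worked example: two marker files bracket the range; -delete removes
# whatever falls strictly between them. The end marker matches its own
# range test, so it is excluded by name.
cd "$(mktemp -d)"
touch -t 202601010000 start_mark
touch -t 202601310000 end_mark
touch -t 202601150000 inside     # between the markers: deleted
touch -t 202512010000 outside    # before the range: survives
find . -type f -newer start_mark ! -newer end_mark ! -name end_mark -delete
ls                               # inside is gone; the rest remain
```

<p>GNU find can also take the dates directly and skip the marker files entirely with <code>-newermt &#39;2026-01-01&#39; ! -newermt &#39;2026-01-31&#39;</code> (a GNU extension, so check your find supports it).</p>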
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old MagSafe (1) power pack, which I did from my old 13&quot; MacBook Pro, and it has a higher or equal wattage rating, you can use it with your MacBook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60-watt MacBook Pro MagSafe can power a 45-watt MacBook Air, but a 60-watt MagSafe can&#39;t power an 85-watt 15&quot; MacBook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
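<p>Since chsh only accepts shells listed in /etc/shells, a quick check up front saves you the rejection. The path below assumes Macports&#39; default /opt/local prefix:</p>

```shell
# Is the MacPorts bash already permitted as a login shell?
# grep -x matches the whole line, so a prefix won't false-positive.
shell=/opt/local/bin/bash
if grep -qx "$shell" /etc/shells 2>/dev/null; then
    echo "chsh will accept $shell"
else
    echo "add it first: echo $shell | sudo tee -a /etc/shells"
fi
```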
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
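<p>To try the formatting pipeline without a database handy, the mysql step can be stubbed out with printf (my substitution, not part of the original command):</p>

```shell
# Stand-in for the mysql output: three rows, one value per line.
# awk quotes each value, paste joins the lines with commas, sed
# wraps the lot in [ ]; and appends the terminating semicolon.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# prints: ['red','green','blue'];
```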
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is:</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a>, who looks in-depth at the topic. I generally think, though, that in Object Oriented software you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While that means it has to be constructed anew each time, it also allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
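<p>If you&#39;d rather not paste by hand, the same entry can be written in one go with a heredoc. This is just a sketch assuming the /opt/PhpStorm symlink layout described above; adjust the paths for your own install.</p>

```shell
# Non-interactive alternative: write the desktop entry with a heredoc.
# Paths assume PhpStorm lives at /opt/PhpStorm as described above.
mkdir -p "$HOME/.local/share/applications"
cat > "$HOME/.local/share/applications/jetbrains-phpstorm.desktop" <<'EOF'
[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
EOF
```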
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way any more). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils... and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. And by break I mean that, in the absolute best case, they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as the one to install it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At that point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, where you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
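<p>For reference, the whole sequence above can be collected into one small script. This is only a sketch: the partition names are the examples used in this post, and it defaults to a dry run (printing the commands rather than executing them) so you can sanity-check the plan before re-running it for real, as root, with DRY_RUN set empty.</p>

```shell
# Sketch of the recovery sequence. /dev/sda5 and /dev/sda1 are examples;
# substitute your own partitions. Defaults to a dry run that only prints
# the commands; run for real as root with DRY_RUN= (empty).
DRY_RUN=${DRY_RUN-1}

# Echo the command in dry-run mode, otherwise execute it.
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

recover() {
    run mount -t ext4 /dev/sda5 /mnt/ubuntu
    run mount -t ext2 /dev/sda1 /mnt/ubuntu/boot   # only if /boot is separate
    run mount -t proc none /mnt/ubuntu/proc
    run mount -o bind /dev /mnt/ubuntu/dev
    run mount -o bind /sys /mnt/ubuntu/sys
    run cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
    run chroot /mnt/ubuntu /bin/bash
}

recover
```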
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where the product id comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to be first in the result set so it gets used as the key.</p>
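<p>To see the collapse in action, here is the fetchPairs keying behaviour simulated with awk over made-up (status, entity_id) rows; the ids are invented purely for illustration:</p>

```shell
# fetchPairs keys the result on the first column, so later rows overwrite
# earlier rows that share a first-column value. Sample rows: status entity_id
rows='1 101
1 102
2 103'
# status first: only two distinct statuses, so only two pairs survive
printf '%s\n' "$rows" | awk '{ if (!($1 in seen)) n++; seen[$1] = $2 } END { print n }'   # prints 2
# entity_id first: every product keeps its own entry
printf '%s\n' "$rows" | awk '{ if (!($2 in seen)) n++; seen[$2] = $1 } END { print n }'   # prints 3
```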
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is an unusual view, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be comfortably read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practise what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
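<p>Alternatively, if you prefer to keep your vimrc clean, the same autocmd can live in an ftdetect file, which Vim loads automatically at startup:</p>

```shell
# Equivalent setup without touching vimrc: Vim sources everything
# under ~/.vim/ftdetect/ when it starts.
mkdir -p "$HOME/.vim/ftdetect"
cat > "$HOME/.vim/ftdetect/twig.vim" <<'EOF'
au BufRead,BufNewFile *.twig set filetype=htmljinja
EOF
```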
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically, this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell, such as when you use the su - command or run the explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell means .bash_profile is sourced, and .bashrc only if your .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
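<p>The latter arrangement looks something like this. It&#39;s only a sketch: the PATH line is an example of one-time setup, not a recommendation.</p>

```shell
# ~/.bash_profile -- one-time environment setup, then defer to .bashrc
# so login and non-login interactive shells end up configured the same.
export PATH="$HOME/bin:$PATH"
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```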
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block from within your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, patents reduce the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
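<p>An alternative sketch: packages that have been removed but not purged show up with an &#39;rc&#39; state in <em>dpkg -l</em>, so you can also collect their names with awk. Shown here against a canned two-line sample standing in for real dpkg -l output, so the pipeline is easy to follow:</p>

```shell
# Packages in the 'rc' state were removed with their config files left behind.
# Canned sample standing in for real `dpkg -l` output:
sample='ii  bash    5.0-1  amd64  GNU Bourne Again SHell
rc  oldpkg  1.0-1  amd64  removed, config files remain'
printf '%s\n' "$sample" | awk '/^rc/ { print $2 }'   # prints oldpkg
# For real (needs root): apt-get purge $(dpkg -l | awk '/^rc/ { print $2 }')
```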
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm can only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
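<p>One more trick I find handy: <em>VBoxManage list runningvms</em> prints each VM as a quoted name followed by its UUID, so the names can be peeled out with sed and fed back into controlvm. A sketch against canned output (the VM names and UUIDs here are made up):</p>

```shell
# `VBoxManage list runningvms` emits lines like: "vm name" {uuid}
# Canned sample so the sed expression is easy to verify:
runningvms='"web" {0fada35e-0000-0000-0000-000000000001}
"db" {0fada35e-0000-0000-0000-000000000002}'
printf '%s\n' "$runningvms" | sed 's/^"\(.*\)" {.*$/\1/'
# For real: VBoxManage list runningvms | sed 's/^"\(.*\)" {.*$/\1/' |
#   while IFS= read -r vm; do VBoxManage controlvm "$vm" poweroff; done
```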
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that, then: we need compatible x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing compatible 32bit versions of both libxss and several Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put in and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert resizes the image to fit <em>within</em> the requested box while preserving the aspect ratio, giving 1152x720. But if you <em>really</em> want it to ignore common sense and squish things down to the requested 1280x720, you need to use the bang operator e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
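<p>The fit-inside arithmetic can be sketched as a small bash function. This is an illustration of the geometry only, not ImageMagick&#39;s actual code, and it truncates with integer division where ImageMagick may round slightly differently:</p>

```shell
# Fit SRC_W x SRC_H inside MAX_W x MAX_H, preserving aspect ratio.
# Cross-multiplying avoids floating point: the box is width-limited
# when max_w/src_w <= max_h/src_h, i.e. max_w*src_h <= max_h*src_w.
fit_within() {  # usage: fit_within SRC_W SRC_H MAX_W MAX_H
  local sw=$1 sh=$2 mw=$3 mh=$4
  if (( mw * sh <= mh * sw )); then
    echo "$mw x $(( sh * mw / sw ))"   # width is the binding constraint
  else
    echo "$(( sw * mh / sh )) x $mh"   # height is the binding constraint
  fi
}

fit_within 1920 1080 1280 720   # 16:9 source fills the box exactly
fit_within 1920 1200 1280 720   # 16:10 source is height-limited
```

The first call prints 1280 x 720, the second 1152 x 720 — both dimensions stay inside the requested box.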
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Specifying just a leading : (and no local branch name) is basically saying: push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
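<p>You can replay the whole sequence against throwaway local repositories to see it work end to end (all paths and names here are just for the demo):</p>

```shell
# A bare repo stands in for the remote; "work" plays the local clone.
mkdir git-delete-demo && cd git-delete-demo
git init --bare remote.git
git init work && cd work
git symbolic-ref HEAD refs/heads/master     # pin the branch name for the demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m 'initial commit'
git remote add origin ../remote.git
git push origin master

git branch develop && git push origin develop
git branch -r                   # origin/develop and origin/master

git push origin :develop        # push "nothing" into develop: it is deleted
git branch -r                   # only origin/master remains
```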
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. But what is different is how Sponge waits until the end-of-file character (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
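<p>Here is the round trip on a single throwaway file. A temporary copy plus mv stands in for sponge, so this sketch runs even without moreutils installed:</p>

```shell
# \351 and \350 are the CP1252 (and latin-1) bytes for é and è.
printf 'caf\351 cr\350me\n' > demo.txt

# Convert, then move the result over the original (sponge would let us
# pipe straight back into demo.txt instead of using a temp file).
iconv -f cp1252 -t utf-8 demo.txt > demo.txt.utf8 && mv demo.txt.utf8 demo.txt

file demo.txt    # should now report UTF-8 encoded text
```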
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
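<p>You can see the same mechanics without touching the network by letting cat stand in for wget (process substitution is a bash-ism, so run this under bash):</p>

```shell
# Build a small tarball to play with.
mkdir -p atarfile && echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile

# Same shape as the wget one-liner: the process substitution <( ) gives
# tar a file descriptor fed by cat's stdout.
tar zxv < <(cat atarfile.tar.gz)
```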
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can take the module list command&#39;s output, using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a>, and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
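<p>If command substitution itself is new to you, the mechanics are easy to see with everyday commands; here a plain file (with a few module names from the listing above) stands in for the drush call:</p>

```shell
# The inner command's stdout is spliced into the outer command's
# argument list, exactly as with the drush one-liner.
printf 'ad\nad_channel\nclick_filter\n' > modules.txt
echo drush pm-disable $(cat modules.txt)
```

This prints <em>drush pm-disable ad ad_channel click_filter</em>: the newline-separated listing becomes a single argument list.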
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
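<p>The same rescue, reproduced end to end in a scratch repository. &#39;git stash pop&#39; is apply followed by drop, which saves tidying up the stash afterwards:</p>

```shell
# Scratch repository reproducing the mix-up: an edit made on the
# default branch that really belongs on develop.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.name demo && git config user.email demo@example.com
echo 'v1' > notes.txt
git add notes.txt && git commit -q -m 'Initial commit'
git branch develop

echo 'v2' > notes.txt    # oops -- edited before switching branches
git stash                # shelve the uncommitted change
git checkout -q develop
git stash pop            # re-apply it on the right branch and drop it
git commit -aqm 'Apply stashed changes'
```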
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
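<p>There&#39;s also a shorthand that infers the local branch name from the remote-tracking one. A sketch in a throwaway pair of repositories:</p>

```shell
# Throwaway setup: a 'remote' with a develop branch, plus a clone of it.
tmp=$(mktemp -d) && cd "$tmp"
git init -q seed && cd seed
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'Initial commit'
git branch develop
cd .. && git clone -q seed work && cd work

# --track infers the local branch name from the remote-tracking ref,
# so this is equivalent to: git checkout -b develop origin/develop
git checkout -q --track origin/develop
```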
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
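<p>For example, a php.ini include_path along these lines keeps a bundled copy ahead of the global PEAR tree (the paths are illustrative):</p>

```ini
; Paths are illustrative. The application's own library directory is
; listed before the PEAR tree, so a bundled Zend/ copy wins over the
; globally installed one.
include_path = ".:/var/www/app/library:/usr/share/php"
```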
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
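<p>If you prefer, git itself can write the same settings. On a real server you&#39;d run these as the jenkins user with HOME pointed at the Jenkins home directory; the sketch below uses a scratch HOME so it&#39;s safe to try anywhere:</p>

```shell
# Equivalent git commands. On a real box run them as the jenkins user
# with HOME=/var/lib/jenkins; a scratch HOME is used here so the
# snippet can be run anywhere without touching a real config.
export HOME=$(mktemp -d)
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
git config --global user.name    # prints Jenkins
```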
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
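<p>If your project doesn&#39;t already have a tests/bootstrap.php, a ZF1 test bootstrap typically looks something like the sketch below; the constants and paths follow common convention and should be adjusted to your layout:</p>

```php
<?php
// Hypothetical minimal tests/bootstrap.php for a Zend Framework 1 app.
define('APPLICATION_ENV', 'testing');
define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

// Put the application's library (and any bundled ZF copy) ahead of the
// global install on the include path.
set_include_path(implode(PATH_SEPARATOR, array(
    APPLICATION_PATH . '/../library',
    get_include_path(),
)));

require_once 'Zend/Application.php';
```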
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: reload the server configuration</li>
<li>restart: restart the server</li>
<li>exit: shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this: of all the caching quirks in Magento, this is the one I&#39;ve burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
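<p>The embedded gist may not render in every feed reader, so here is a sketch of the idea (not necessarily the exact query): sum each table&#39;s data and index length from INFORMATION_SCHEMA and report the biggest offenders first. The schema name is a placeholder:</p>

```sql
-- Per-table size in MB for one schema, largest first.
-- Replace 'your_database' with the schema you want to inspect.
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS total_mb
  FROM information_schema.TABLES
 WHERE table_schema = 'your_database'
 ORDER BY (data_length + index_length) DESC
 LIMIT 20;
```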
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, application deployments, tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>$ rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace, though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved with your package manager of choice, for example aptitude:</p>
<pre><code>$ sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
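<p>The core of it is Zend_Mail&#39;s createAttachment() method, which returns a Zend_Mime_Part you can then configure. A minimal sketch; the addresses and file path below are placeholders:</p>

```php
<?php
// Sketch only -- addresses and the file path are placeholders.
$mail = new Zend_Mail();
$mail->setFrom('sender@example.com');
$mail->addTo('recipient@example.com');
$mail->setSubject('Monthly report');
$mail->setBodyText('Report attached.');

// createAttachment() returns a Zend_Mime_Part; its public fields
// control how the attachment is presented to the recipient.
$attachment = $mail->createAttachment(file_get_contents('/path/to/report.pdf'));
$attachment->type     = 'application/pdf';
$attachment->filename = 'report.pdf';

$mail->send();
```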
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>$ gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - EGit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because a subsequent <em>git pull</em> will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>You can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>to avoid having to do any of this for future branches.</p>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get the list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using instead the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
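<p>If you control the hash, another way around the mismatch is to normalise the keys up front. Here&#39;s a minimal sketch in plain Ruby (the symbolize_keys helper is hypothetical, written for this post; Rails ships HashWithIndifferentAccess for exactly this problem):</p>

```ruby
# Hypothetical helper: copy a hash, converting every key to a Symbol.
# Symbol#to_sym returns the symbol itself, so mixed keys are safe.
def symbolize_keys(hash)
  hash.each_with_object({}) { |(k, v), out| out[k.to_sym] = v }
end

myarray = { :mykey => 'hello world', 'another_key' => 'goodbye world' }
normalised = symbolize_keys(myarray)

['mykey', 'another_key'].each { |k| puts normalised[k.to_sym] }
# >> hello world
# >> goodbye world
```

<p>After normalising, every lookup goes through Symbols and the String/Symbol distinction stops biting you.</p>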
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select the PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using string objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one: it has few methods (the main ones return its string and integer values), it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby String. In Ruby (and most OO languages) two Strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
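<p>This one-copy behaviour is easy to verify for yourself with Object#equal?, which tests object identity rather than value equality:</p>

```ruby
# Two equal-looking Strings are distinct objects in memory;
# the "same" Symbol is literally the same object wherever it appears.
a = 'key'.dup
b = 'key'.dup
puts a == b               # => true  (same characters)
puts a.equal?(b)          # => false (two different String objects)
puts :key.equal?(:key)    # => true  (one shared Symbol object)
puts 'key'.to_sym == :key # => true  (String to Symbol round-trip)
```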
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime: you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ ql() { qlmanage -p &quot;$@&quot; &gt;/dev/null 2&gt;&amp;1; }
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character if the output is ASCII or up to four bytes per character with UTF-8. Bzip2 will bring the file size down a considerable amount, but it&#8217;s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size: the extra CPU time consumed decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL&#8217;s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump; that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysql issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other clients: writes can occur while the backup is taking place without affecting the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean table integrity can be lost as writes occur during the backup process; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement on an InnoDB table is autocommitted. This carries unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It&#8217;s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There&#8217;s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (—untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or number of paths will run the status for that path rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you&#8217;ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix VMware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
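<p>If something else is already squatting on port 8808, the server can be bound elsewhere with the -p/--port option (check gem help server if your RubyGems is very old):</p>&#13;
<pre>$ gem server -p 8809</pre>&#13;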
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want Git or Subversion SCM access, you&#39;ll want to install those plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
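<p>If you prefer to stay on the command line here too, plugins can also be installed with the jenkins-cli tool. A sketch, assuming Jenkins is listening on localhost:8080 and you want the git and subversion plugins:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
$ java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin git subversion
$ java -jar jenkins-cli.jar -s http://localhost:8080 safe-restart
</code></pre>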
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to set up your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will only sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name. In Ruby, and in Perl, it is the first argument passed into the program.</p>&#13;
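<p>If you do need the program's name in Ruby, it is available separately as the global $0 (also spelled $PROGRAM_NAME), so nothing is lost:</p>&#13;
<pre>#!/usr/bin/env ruby&#13;
# file name.rb&#13;
puts $0       # program name, like argv[0] in C&#13;
puts ARGV[0]  # first command line argument&#13;
$ ruby name.rb helloworld&#13;
&gt; name.rb&#13;
&gt; helloworld&#13;
</pre>&#13;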
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin/feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
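<p>If you're not sure where MacPorts put the config script, a quick look in /opt/local/bin will find it:</p>&#13;
<p><code>ls /opt/local/bin | grep mysql_config</code></p>&#13;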
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
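<p>Putting the pieces together, an update then becomes a delete followed by a re-add. A sketch against the ZF 1.x API ($newTitle here is a placeholder for whatever new field values you have):</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
foreach ($index-&gt;termDocs($term) as $id) {&#13;
    $index-&gt;delete($id); // flag the stale document as deleted&#13;
}&#13;
$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('path', '/somepath/somewhere'));&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $newTitle));&#13;
$index-&gt;addDocument($doc);&#13;
$index-&gt;commit();&#13;
</pre>&#13;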
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional 'filter' parameter. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP it seems documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic-table format, suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book, or in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a custom language pack, convert its XML export to UTF-8 as well:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
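<p>To double-check the conversion took, inspect a table definition and look for DEFAULT CHARSET=utf8 (the user table is used here just as an example):</p>&#13;
<p><code>mysql -uusername -ppassword --execute="SHOW CREATE TABLE forum_db_name.user"</code></p>&#13;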
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (traits), a simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized; Ruby doesn't speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
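<p>A minimal sketch of what worries me (the constant name here is invented for illustration):</p>&#13;

```ruby
# Ruby "constants" are merely capitalised variables; reassignment
# only triggers a warning, it does not fail.
SPEED_LIMIT = 70
SPEED_LIMIT = 90   # warning: already initialized constant SPEED_LIMIT
puts SPEED_LIMIT   # prints 90 -- so much for constancy
```

<p>Note that even <code>freeze</code> only protects the object a constant points at, not the binding itself.</p>&#13;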
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys that haven't been seduced by Python, and a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days, C is a little like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit of learning a bit of C or C++, though, is to get an appreciation of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ('\0').</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes: room for 19 characters plus the terminating NUL, with str referring to the start of that block of memory. Now, if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone outside the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in: this is where your business logic was originally meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
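<p>For example, writing CSV data without ever touching the disk (the field names are invented for illustration):</p>&#13;

```php
<?php
// Build CSV in memory, then read it back as a string.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, array('sku', 'qty'));
fputcsv($fh, array('ABC-1', 3));
rewind($fh);                      // seek back to the start before reading
$csv = stream_get_contents($fh);  // "sku,qty\nABC-1,3\n"
fclose($fh);
echo $csv;
```

<p>The one thing to remember is the <code>rewind()</code>: after writing, the stream's file pointer sits at the end, so reading without it returns nothing.</p>&#13;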
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, whether through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
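<p>For reference, a combined externals property covering both of the above, set on a single vendor directory, might look like this (the target directory names are just examples; each definition goes on its own line):</p>&#13;

```
Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/
dojo http://svn.dojotoolkit.org/src/tags/release-1.4.3/
```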
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</p>&#13;
<p>The solution is explained here:</p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model with Mage::getModel('mymodule/alongnameforamodel') will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine, but on a case-sensitive file system, e.g. case-sensitive HFS+ (Mac) or a typical Unix file system, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If you are only updating simple attributes (for example, a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second, while doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table that has a date set more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
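<p>For instance, with a hypothetical integer column created_ts holding unix timestamps (the column name is just for illustration), the same comparison becomes:</p>

```sql
-- created_ts: hypothetical INT column of unix timestamps
SELECT * FROM table
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(created_ts);
```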
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old, and the WHERE clause will match it.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included (note the echo: getChildHtml returns the rendered block rather than outputting it). Remember to refresh, or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so all customer account pages) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB, or de_DE (and you can get a list of available locales by calling 'locale -a') you just need to edit /etc/profile or ~/.profile or ~/.bash_profile, and put the line export LC_ALL='de_DE.UTF-8' or LC_ALL='en_GB.UTF-8'</p>&#13;
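<p>To make that concrete, the line to add to ~/.profile (using de_DE here; substitute whichever utf-8 locale 'locale -a' lists on your machine) is simply:</p>

```shell
# Select a UTF-8 locale for every locale category; the exact name
# should appear in the output of `locale -a` for it to take effect.
export LC_ALL='de_DE.UTF-8'
echo "$LC_ALL"   # de_DE.UTF-8
```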
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, change. So a cp-1252 trademark symbol has a different code in utf-8, and if you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, as that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, null, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't (at least not with the BSD sed that ships with Mac OSX), and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to pass an empty backup suffix: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code></p>&#13;
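<p>As a quick sketch of the round trip (this uses the GNU sed form found on most Linux boxes, where -i takes no separate suffix argument; on Mac OSX's BSD sed pass the empty suffix as described in this post):</p>

```shell
# Work in a scratch directory, substitute in place, and show the result.
cd "$(mktemp -d)"
printf 'hello world\n' > helloworld.txt
sed -i 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt   # goodbye world
```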
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expression generates the list of array indices: $((${#FILES[@]} - 1)) expands to the number of elements in FILES minus one, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3 and 4 back, one per line.</p>&#13;
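<p>If you do not need the numeric index, a simpler sketch (same made-up file list) expands the array values directly; quoting the expansion keeps any elements containing spaces intact:</p>&#13;

```shell
# Iterate over the array's values rather than its indices
FILES=( a/path/to/a/file1 a/path/to/another/file2 )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```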
<p>So while the syntax is a little smelly, its terse power is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec 3&lt;&amp;-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment or shipping account details, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento from the store root so core models are available&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
// The plaintext value to encrypt is the first CLI argument&#13;
$data = $_SERVER['argv'][1];&#13;
// Encrypt with the store's crypt key, as Magento does internally&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here&#39;s a hot tip: don&#39;t use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date --date 'last month' '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
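<p>Under the hood, <em>git symbolic-ref --short HEAD</em> is what resolves the current branch name for the alias. A quick sketch in a throwaway repository (directory and branch names here are invented):</p>

```shell
tmp=$(mktemp -d) && cd "$tmp"    # scratch repository
git init -q
git checkout -qb mybranch        # works even before the first commit
git symbolic-ref --short HEAD    # prints: mybranch
```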
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
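<p>A minimal sketch of the gotcha, with one workaround (all names invented): in stock bash the piped-to while loop runs in a subshell, so its counter is lost; feeding the loop by redirection instead keeps it in the current shell.</p>

```shell
count=0
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))   # increments a copy inside the subshell
done
echo "$count"            # prints 0 - the parent never saw the increments

count=0
while read -r line; do   # same loop, but fed by redirection, no pipeline
  count=$((count + 1))
done <<EOF
a
b
c
EOF
echo "$count"            # prints 3
```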
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits; they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff &lt;branch&gt;
</code></pre>
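<p>Here&#39;s a sketch of the guard at work in a throwaway repository (identity, file and branch names are all invented): once the histories diverge, a plain merge under merge.ff only is refused.</p>

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com   # throwaway identity
git config user.name demo
git config merge.ff only                 # refuse non-fast-forward merges
echo base > file && git add file && git commit -qm base
main=$(git symbolic-ref --short HEAD)    # default branch name varies
git checkout -qb feature
echo feature >> file && git commit -qam feature
git checkout -q "$main"
echo other > other && git add other && git commit -qm diverge
# feature and $main have diverged, so this cannot fast-forward:
if git merge feature 2>/dev/null; then result=merged; else result=refused; fi
echo "$result"                           # refused
```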
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt="The Grumpy Programmer&#39;s PHPUnit Cookbook cover"></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. You find peppered throughout the introduction to PHPUnit subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: the Statist and the Mockist/London School styles. The former is a test style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
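<p>If you want to poke at this without a MySQL server handy, the same trick works from a one-liner against sqlite3&#39;s in-memory database (the table and prices below are made up); SQLite supports NULLIF and MIN too:</p>

```shell
# Group 1 mixes a zero price with real prices; group 2 has only zeros
sqlite3 :memory: "
CREATE TABLE products (group_id INTEGER, price REAL);
INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 0.0);
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products GROUP BY group_id;"
# group 1 -> 4.5; group 2 -> empty, since every price became NULL
```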
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our Magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;export PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or ~/.zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc
</code></pre>
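<p>Why the alias works is easy to demonstrate with a stand-in script (fake-pear below is a hypothetical throwaway for illustration; the real pear consults PHP_PEAR_PHP_BIN the same way):</p>

```shell
# Stand-in for pear that reports which PHP binary it would use.
# (fake-pear is a made-up script, only here to show the env-var mechanism.)
cat > fake-pear <<'EOF'
#!/bin/sh
echo "using: ${PHP_PEAR_PHP_BIN:-/usr/bin/php}"
EOF
chmod +x fake-pear

./fake-pear                       # no env var: falls back to the system PHP
PHP_PEAR_PHP_BIN=php ./fake-pear  # what the alias sets: resolves php via $PATH
```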
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now to make installing PEAR packages easier I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
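<p>It&#39;s worth sanity-checking the generated pair before moving it into place. A quick sketch (assumes the openssl CLI is on your PATH; filenames are throwaway):</p>

```shell
# Generate a throwaway self-signed pair non-interactively (-subj avoids the
# interactive prompts), then confirm both halves parse.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=magento.dev" -keyout demo.key -out demo.crt 2>/dev/null
openssl x509 -noout -subject -in demo.crt   # subject should mention magento.dev
openssl rsa -check -noout -in demo.key      # key should check out ok
```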
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
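<p>If you&#39;re ever unsure whether a path is fully traversable, this sketch walks each component and checks the other-execute bit (it uses a throwaway directory tree standing in for your home directory):</p>

```shell
# Build a stand-in for /Users/aaron/Sites, mark every component
# world-traversable, then verify the other-execute bit on each one.
mkdir -p demo-home/Sites
chmod a+x demo-home demo-home/Sites

d=demo-home/Sites
ok=yes
while [ "$d" != "." ]; do
  # column 10 of `ls -ld` output is the execute bit for "other" users
  [ "$(ls -ld "$d" | cut -c10)" = "x" ] || { echo "missing a+x on $d"; ok=no; }
  d=$(dirname "$d")
done
echo "traversal: $ok"
```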
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
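<p>As an aside, POSIX shell parameter expansion makes the same first-vs-last distinction with one consistent notation: <code>#</code> strips the shortest leading match and <code>##</code> the longest. A quick sketch:</p>

```shell
url='http://www.google.com/a/b/c/d.img'
echo "${url##*/}"   # longest prefix match stripped: prints d.img
echo "${url#*/}"    # shortest prefix match stripped: prints /www.google.com/a/b/c/d.img
```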
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed whether it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resigned and either followed him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or joined one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you pit a visionary maverick against academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
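<p>In shell terms, the override boils down to mirroring the core path under local and editing the copy. A sketch (demo-magento is a stand-in directory for illustration; point it at your real Magento root):</p>

```shell
# Stand-in Magento root; replace with your real install path.
MAGE_ROOT=demo-magento
REL=Mage/GoogleCheckout/Model/Api/Xml/Checkout.php

# Simulate the shipped core file so this sketch is self-contained.
mkdir -p "$MAGE_ROOT/app/code/core/$(dirname "$REL")"
echo 'core class stub' > "$MAGE_ROOT/app/code/core/$REL"

# The actual override step: copy core -> local, then edit the local copy.
# The classloader resolves local before community before core, so the
# patched copy wins.
mkdir -p "$MAGE_ROOT/app/code/local/$(dirname "$REL")"
cp "$MAGE_ROOT/app/code/core/$REL" "$MAGE_ROOT/app/code/local/$REL"
ls "$MAGE_ROOT/app/code/local/$REL"
```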
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Dont Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order local, community then core. This means if two classes have the name Mage_Core_Model_Foo one exists in local the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
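<p>The drain-then-exit behaviour the workers show above is easy to mimic with a toy process that traps a shutdown signal, finishes its in-flight &#39;request&#39;, and only then exits. (Toy illustration only, not nginx; it traps TERM here because non-interactive shells start background jobs with QUIT ignored, whereas real nginx drains on QUIT.)</p>

```shell
# Toy worker: on TERM it records that it drained the in-flight "request"
# before exiting, then exits cleanly.
sh -c 'trap "echo draining request > drained.txt; exit 0" TERM
       while :; do sleep 1; done' &
pid=$!
sleep 1            # let the worker install its trap
kill -TERM "$pid"  # ask it to shut down
wait "$pid"        # returns once the worker has finished draining
echo "worker exit status: $?"
cat drained.txt
```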
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did in November though, resolve to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, Metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite formalising most of the vocabulary for OO software development during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s The Mythical Man-Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>Since making a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
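<p>For example, a couple of colon-delimited rows (sample data, not a real passwd file) come back neatly aligned:</p>
<pre><code>printf "root:x:0:0\ndaemon:x:1:1\n" | column -s: -t
</code></pre>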
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work in a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will then switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
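<p>Worth noting: these behaviours are not zsh defaults. They come from a handful of options and aliases, typically set up by frameworks like oh-my-zsh. A rough ~/.zshrc sketch (the numeric alias is just one way to reproduce that shortcut):</p>
<pre><code>setopt AUTO_CD       # type a bare directory name (or ..) to cd into it
setopt AUTO_PUSHD    # every cd pushes the old directory onto the stack
alias d="dirs -v"    # list the stack with numeric indices
alias 1="cd +1"      # jump to stack entry 1, and so on for 2, 3, ...
</code></pre>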
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine; however, if we want to debug during a phpunit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to its own localhost on port 9000 back to the ssh client&#39;s port 9000. When xdebug goes to connect to localhost:9000 on the vm, it ends up actually connecting to mydevmachine.local:9000.</p>
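<p>Putting it together, a sketch of the whole workflow (hostnames as in the examples above; no -d flags are needed because xdebug is left pointing at its default of localhost:9000):</p>
<pre><code># on the dev machine, where the IDE is listening on port 9000
ssh -R 9000:localhost:9000 myvm.local

# then, inside that ssh session on the vm; the tunnel carries the
# xdebug connection from localhost:9000 back to the IDE
PHP_IDE_CONFIG="serverName=mydevmachine.local" phpunit -c phpunit.xml
</code></pre>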
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.com">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What really, is wrong with PEAR? Well, if we wind the clock way back to 1999 when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to PERL&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age strictly speaking is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x, requires stable package y. Package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of, at best, variable and, at worst, dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host it themselves.</p>
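<p>The consumer&#39;s side of this is deliberately simple. A composer.json at the project root declares what you depend on (the package and version constraint here are real but purely illustrative), and running composer install fetches everything into ./vendor along with a generated autoloader:</p>
<pre><code>{
    "require": {
        "monolog/monolog": "1.2.*"
    }
}
</code></pre>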
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
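<p>A quick way to see --name-only in action is a throwaway repository (paths and messages purely illustrative):</p>
<pre><code>cd "$(mktemp -d)"
git init -q
touch readme.txt
git add readme.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "add readme"

# prints just the path(s) the commit touched: readme.txt
git show --name-only --format= HEAD
</code></pre>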
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important, as by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there&#39;s more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem, is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time where people still actually wrote web applications in C and where the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archaeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but basically it means: if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, one lesson is that Rails programmers live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh this list I need to prune my branches. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
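<p>The whole effect is easy to reproduce in a scratch repository; this is a sketch (branch names and identities are made up), and on recent git versions a single <code>git fetch --prune</code> does the fetch and prune in one step:</p>

```shell
# Scratch reproduction: a branch deleted on the remote from *another* clone
# leaves a stale remote-tracking ref behind until it is pruned.
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/clone" 2>/dev/null
cd "$tmp/clone"
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m 'initial'
git push -q origin HEAD
git checkout -q -b doomed
git push -q origin doomed
# delete the branch from a second clone, as in the post
git clone -q "$tmp/remote.git" "$tmp/other" 2>/dev/null
git -C "$tmp/other" push -q origin :doomed
before=$(git branch -r | grep -c doomed)   # stale tracking ref still listed
git remote prune origin > /dev/null
after=$(git branch -r | grep -c doomed)    # gone after pruning
echo "before=$before after=$after"
```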
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
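<p>For illustration, the kind of crontab task I mean might look like this. The table and column names are from the stock Magento 1.x schema, but the credentials, schedule and 30-day retention window are placeholders; verify the foreign keys on your own install cascade as expected before relying on it:</p>

```shell
# Hypothetical nightly job: drop visitor log rows older than 30 days.
# Related rows in log_visitor_info etc. should follow via ON DELETE CASCADE.
0 3 * * * mysql -uuser -ppass yourdb -e 'DELETE FROM log_visitor WHERE last_visit_at < NOW() - INTERVAL 30 DAY;'
```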
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one copy of the line but (programmers are human) missing the other, identical, one. And we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
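<p>For reference, the override amounts to copying the core file to the same relative path under app/code/local and editing the copy. Sketched here against a scratch directory standing in for a real Magento root:</p>

```shell
# Magento's autoloader checks app/code/local before app/code/core, so a copy
# at the same relative path shadows the core version.
MAGE_ROOT=$(mktemp -d)                   # stand-in for your Magento root
core=app/code/core/Mage/CatalogSearch/Block/Result.php
override=app/code/local/Mage/CatalogSearch/Block/Result.php
mkdir -p "$MAGE_ROOT/$(dirname "$core")" "$MAGE_ROOT/$(dirname "$override")"
echo '<?php // stand-in for the real core file' > "$MAGE_ROOT/$core"
cp "$MAGE_ROOT/$core" "$MAGE_ROOT/$override"
# ...now apply the getEscapedQueryText() fix to the copy, and clear caches
ls "$MAGE_ROOT/$override"
```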
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags, which let you provide environment-specific configuration for your provisioning. Using Data Bags with Chef Solo is not yet supported by the stock Chef gem (currently version 10.12.0); you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
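<p>A quick scratch-directory sanity check of the technique:</p>

```shell
# Create boundary files plus one file inside and one outside the range, then
# list what find matches. Note the end boundary file itself matches too,
# since a file is not -newer than itself.
dir=$(mktemp -d)
cd "$dir"
touch -t 202401010000 start_date_file
touch -t 202401150000 in_range_file
touch -t 202402010000 end_date_file
touch -t 202403010000 too_new_file
matched=$(find . -type f -newer start_date_file ! -newer end_date_file | sort)
echo "$matched"
```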
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and &#39;\;&#39; terminates the command sequence, much like &#39;;&#39; does in regular bash (the backslash just stops your shell from interpreting the semicolon itself).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to setup a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent releases templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning. First: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can drive a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2 whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
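<p>The edit itself is a one-liner. Sketched below against a scratch copy standing in for /etc/shells; on a real system you&#39;d target /etc/shells itself via sudo, and the MacPorts path is the one from above:</p>

```shell
# Append the MacPorts bash to the shells whitelist if it isn't already there.
NEW_SHELL=/opt/local/bin/bash
shells_file=$(mktemp)                    # stand-in for /etc/shells
printf '/bin/bash\n/bin/zsh\n' > "$shells_file"
grep -qx "$NEW_SHELL" "$shells_file" || echo "$NEW_SHELL" >> "$shells_file"
added=$(tail -n 1 "$shells_file")
echo "$added"
```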
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Because the awk program itself is wrapped in single quotes, you can&#39;t easily embed a literal one, so you pass it in via the q variable instead. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
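<p>You can try the text-mangling half of the pipeline without a database by substituting printf for mysql (the colour values are just example rows):</p>

```shell
# printf stands in for the mysql output: one column value per line.
# q carries a single quote in via -v, paste joins the lines with commas,
# and sed's & (the whole match) wraps the lot in [ ];
result=$(printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/')
echo "$result"
```

<p>which prints [&#39;red&#39;,&#39;green&#39;,&#39;blue&#39;]; ready to drop into a script.</p>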
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While this means it also has to be constructed anew each time, it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system,  specifically for modern processors abandoning backwards compatibility for long-obsolete ones.</p>
<p>In the early days you had to start from what was called (and I don&#39;t think they do it this way anymore) a &#39;stage 1&#39; install. A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a c compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot; by break I mean, in the absolute best case, merely becoming un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. First step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mount points as the host, so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
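<p>That copy is a one-liner (assuming the /mnt/ubuntu mount point used above):</p>

```shell
# Copy the live system's DNS configuration into the chroot
# (/mnt/ubuntu is the example mount point from the earlier steps)
cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
```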
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
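<p>For the record, the repair commands run inside the chroot were along these lines (a sketch only; /dev/sda is an assumed device name, substitute your actual boot disk):</p>

```shell
# Inside the chroot: finish the half-completed upgrade
apt-get update
apt-get dist-upgrade

# Reinstall the bootloader to the boot disk and regenerate its config
grub-install /dev/sda
update-grub
```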
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining if all their children stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily enough, though, the status column came first and the product id column second (unlike the if branch, where they were the other way around). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it is used as the key.</p>
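<p>The collapse is easy to demonstrate outside of Magento. Here is a small Python sketch (illustrative only, not Magento code) of what keying a result set on its first column does when that column is a shared status rather than a unique id:</p>

```python
# fetchPairs-style keying: the first column becomes the map key,
# so rows sharing a first-column value overwrite each other.

# (status, product_id) pairs: the buggy column order
rows_status_first = [(1, 101), (1, 102), (0, 103)]
# (product_id, status) pairs: the corrected column order
rows_id_first = [(101, 1), (102, 1), (103, 0)]

print(dict(rows_status_first))  # {1: 102, 0: 103} - one surviving row per status
print(dict(rows_id_first))      # {101: 1, 102: 1, 103: 0} - one row per product
```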
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software, Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles, not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and legible. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app) .bash_profile gets sourced only on login, for example when you enter your username and password at the console or log in over ssh. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell, such as if you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only read if .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
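<p>If you take that approach, the delegation in .bash_profile is just a couple of lines (a common pattern rather than anything distro-specific):</p>

```shell
# ~/.bash_profile: defer everything to ~/.bashrc so login and
# non-login interactive shells behave the same
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
```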
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, it reduces the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
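<p>If you want to sanity-check the substitution step before piping it back into dpkg, you can run it on a sample selections line (the package name here is made up):</p>

```shell
# Feed a fake selections line through the same sed substitution
echo "some-old-package deinstall" | sed 's/deinstall/purge/'
# → some-old-package purge
```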
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm may only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
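<p>For a running VM, the equivalent controlvm form (to the best of my knowledge it accepts the same rule syntax) is:</p>

```shell
# Add, then remove, the same SSH forwarding rule on a running VM
VBoxManage controlvm "VM name" natpf1 "guestssh,tcp,,2222,,22"
VBoxManage controlvm "VM name" natpf1 delete "guestssh"
```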
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I&#39;m missing compatible i386 builds of both libXss and the various Qt libraries.</p>
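<p>As a side note, when a binary has a dependency list this long, grepping ldd&#39;s output is a quick way to see only what&#39;s missing:</p>

```shell
# Show only the unresolved shared libraries for the binary in question
ldd /usr/bin/skype | grep "not found"
```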
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can occasionally be dizzying; things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), takes the collection, iterates over it assigning each address to an array keyed by its entityId, and then returns whichever entry matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change in code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric, I think, is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
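<p>For instance, identify&#39;s -format option lets you pull out just the fields you want (the test image here is generated on the spot with convert):</p>

```shell
# Generate a small test image, then inspect it two ways.
convert -size 64x48 xc:gray test-image.png

# Default summary line: filename, format, geometry, depth, etc.
identify test-image.png

# Just the fields we ask for: filename, dimensions, and format.
identify -format '%f: %wx%h %m\n' test-image.png
# prints: test-image.png: 64x48 PNG
```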
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining the 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. In itself this is not remarkable, after all you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file. I.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 (cp1252) to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
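<p>For example (the filename here is made up, and file&#39;s exact wording varies between versions):</p>

```shell
# Write a line containing a single Latin-1 byte (0xE9, an e-acute)
# and ask file to guess the encoding; -i prints MIME type and charset.
printf 'caf\xe9 society\n' > sample.txt
file -i sample.txt
# e.g. sample.txt: text/plain; charset=iso-8859-1
```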
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
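<p>The same mechanism is handy well beyond tar. A classic example (unrelated to the one-liner above, but using the identical machinery) is diffing the output of two commands without any temporary files; each &#39;&lt;(&#39; expression appears to diff as an ordinary readable file:</p>

```shell
# Two files with the same lines in different orders.
printf 'b\na\nc\n' > one.txt
printf 'c\nb\na\n' > two.txt

# Sort each on the fly and compare the results; diff just sees
# two file descriptors (/dev/fd/N) it can read like files.
diff <(sort one.txt) <(sort two.txt) && echo 'same contents'
# prints: same contents
```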
<p>Sounds complicated but looks simple.</p>
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply going through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can chain the module list command&#39;s output into the disable command using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a>, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
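<p>For example, a php.ini include_path with an application&#39;s bundled library first might look like this (the /var/www/myapp path is purely illustrative):</p>

```ini
; Current dir first, then the app's bundled library, then the global PEAR copy
include_path = ".:/var/www/myapp/library:/usr/share/php"
```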
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is usually a configuration problem: your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig:</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
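<p>As an aside, the same identity can be written with git itself rather than editing .gitconfig by hand. A sketch; the helper function and its home-directory argument are my own, not part of git or Jenkins:</p>

```shell
# Write user.name/user.email into the .gitconfig under the given home
# directory; for Jenkins on Ubuntu that would be /var/lib/jenkins.
set_git_identity() {
  HOME="$1" git config --global user.name "Jenkins"
  HOME="$1" git config --global user.email "jenkins@localhost"
}

# e.g. run as the jenkins user: set_git_identity /var/lib/jenkins
```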
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins, using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
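<p>As a shortcut, the PEAR packages in the checklist can be installed in one pass. A sketch that just prints the commands (it assumes the relevant channels, such as pear.pdepend.org, pear.phpmd.org and pear.phpunit.de, have already been discovered); pipe the output to sh to actually run them:</p>

```shell
# Print a `pear install` command for each package in the checklist above.
print_pear_install_commands() {
  for pkg in pdepend/PHP_Depend phpmd/PHP_PMD phpunit/phpcpd phpunit/phploc \
             PHPDocumentor PHP_CodeSniffer phpunit/PHP_CodeBrowser \
             phpunit/PHPUnit phpunit/ppw; do
    echo "sudo pear install --alldeps $pkg"
  done
}

print_pear_install_commands
```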
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the PHPUnit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete, you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: Reload server configuration</li>
<li>restart: Restart the server</li>
<li>exit: Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
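<p>These endpoints are easy to wrap in a small shell helper; a sketch (the function name and its defaults are my own, not part of Jenkins):</p>

```shell
# Send one of Jenkins' http admin commands to a server (default localhost:8080).
jenkins_admin() {
  server="${2:-localhost:8080}"
  case "$1" in
    reload|restart|exit) curl -s "http://$server/$1" ;;
    *) echo "usage: jenkins_admin <reload|restart|exit> [server]" >&2; return 1 ;;
  esac
}

# e.g. jenkins_admin reload
#      jenkins_admin restart build-box:9090
```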
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
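<p>If you prefer to script the port change, a sed one-liner does the job. A sketch, wrapped in a hypothetical helper so the target file can be overridden (GNU sed assumed):</p>

```shell
# Rewrite the HTTP_PORT line in a Jenkins defaults file.
set_jenkins_port() {
  port="$1"; file="${2:-/etc/default/jenkins}"
  sed -i "s/^HTTP_PORT=.*/HTTP_PORT=$port/" "$file"
}

# e.g. (as root) set_jenkins_port 9090
```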
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I&#39;ve lost the most time to.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>
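<p>In practice that means flushing Magento&#39;s cache after adding a column. From the shell it&#39;s one command; a sketch, where the helper name and the default store path are mine, not Magento&#39;s:</p>

```shell
# Flush Magento 1.x's file-based cache so models re-read their table columns.
clear_magento_cache() {
  mage_root="${1:-/var/www/magento}"   # placeholder store root
  rm -rf "$mage_root"/var/cache/*
}

# e.g. clear_magento_cache /path/to/your/store
```

<p>Flushing via the admin&#39;s Cache Management page achieves the same thing.</p>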

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients noticed backups were taking a long time and growing uncomfortably large. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature, and the following query worked very well.</p>
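<p>If the embedded gist below doesn&#39;t render in your reader, the idea is a query along these lines (&#39;mydb&#39; is a placeholder schema name; the gist is the canonical version):</p>

```sql
-- Largest tables first, sizes in MB
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = 'mydb'
ORDER BY size_mb DESC;
```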
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not much help in itself, but it was enough for Google to lead me to a solution, which is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved either using the package manager or aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a>, it&#39;s time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Model_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Model_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can list the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best, with the least amount of work.</p>
<p>You can also set the following to avoid having to do this at all:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
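<p>For reference, the whole flow can be exercised end to end against a local bare repository standing in for GitHub. This is a sketch using throwaway temp directories; note that modern git spells the last step <code>--set-upstream-to</code> rather than the older <code>--set-upstream</code> form shown above:</p>

```shell
# Demo: push an existing repo to a fresh bare "remote", then wire up
# tracking so a plain `git pull` works (local stand-in for GitHub)
set -e
remote=$(mktemp -d)/foo.git
git init --bare "$remote" >/dev/null

work=$(mktemp -d)
cd "$work"
git init >/dev/null
git symbolic-ref HEAD refs/heads/master   # make sure the branch is named master
git -c user.email=you@example.com -c user.name=you \
    commit --allow-empty -m 'initial commit' >/dev/null

git remote add --track master origin "$remote"     # option 3
git push -q origin master
git branch --set-upstream-to=origin/master master  # option 4, modern spelling
git pull -q                                        # no 'which branch' error any more
```

<p>If the remote was added with --track, the explicit --set-upstream-to step is redundant; both are shown here only so either can be tried in isolation.</p>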
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software on which it has in the past been notoriously difficult to employ TDD practices. Luckily, in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a> testing frameworks released to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
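<p>The database side of that file follows the standard Magento local.xml layout. A minimal sketch of the connection block (the credentials are placeholders, and the module&#39;s URL settings live alongside it but are omitted here):</p>

```xml
<!-- Sketch of the test database connection in app/etc/local.xml.phpunit -->
<!-- Placeholder credentials: point this at a throwaway database,        -->
<!-- never at your live store's database                                 -->
<config>
    <global>
        <resources>
            <default_setup>
                <connection>
                    <host>localhost</host>
                    <username>magento_test</username>
                    <password>secret</password>
                    <dbname>magento_test</dbname>
                </connection>
            </default_setup>
        </resources>
    </global>
</config>
```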
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you want to export. Another reply had a better way: use a call to mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
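<p>The combined approach boils down to a small shell function along these lines (a sketch: it assumes credentials come from ~/.my.cnf and that table names contain no whitespace):</p>

```shell
# dump_tables_like: mysqldump only the tables whose names match a
# SQL LIKE pattern (credentials assumed to come from ~/.my.cnf)
dump_tables_like() {
  local db="$1" pattern="$2" tables
  # Ask mysql for the matching table names, one per line
  tables=$(mysql -N -B -e "SHOW TABLES LIKE '${pattern}'" "$db")
  [ -n "$tables" ] || { echo "no tables match ${pattern}" >&2; return 1; }
  # Unquoted expansion splits the list into separate table arguments
  mysqldump "$db" $tables
}
```

<p>Usage: <code>dump_tables_like mydb &#39;mytables_%&#39; &gt; mytables.sql</code></p>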
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until they are, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k|
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
<p>Note that only the first value comes back here: &#39;goodbye world&#39; was stored under a String key, so looking it up with the Symbol :another_key misses it.</p>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select the PDT Development Tools All In One SDK (leave the others unselected) and click Next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language with which I keep feeling on uneven ground is Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C), for example, is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using String objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another to get its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages), two strings are different objects even if they consist of the same sequence of characters. In Ruby, two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<p>An alias can&#39;t reference its arguments (&quot;$@&quot; in an alias refers to the shell&#39;s own positional parameters), so define a small function instead:</p><pre><code>$ ql() { qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null; }
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character for a single-byte encoding such as latin1, or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
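<p>To see the trade-off on your own data, compress the same dump both ways and compare. A quick sketch (mydump.sql here is just a generated stand-in for a real export):</p>

```shell
# Compare gzip and bzip2 output sizes on the same (stand-in) dump file
seq 1 100000 > mydump.sql            # pretend this is a real export
gzip  -c mydump.sql > mydump.sql.gz
bzip2 -c mydump.sql > mydump.sql.bz2
ls -l mydump.sql mydump.sql.gz mydump.sql.bz2
```

<p>On real dumps it&#39;s worth timing each command too (prefix with <code>time</code>); the size saving versus compression and decompression speed is what matters for your backup and restore windows.</p>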
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to it during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
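<p>Putting the options from the last two sections together, a backup helper might look like this (a sketch, not a drop-in script: the database name and choice of gzip are illustrative, and credentials are assumed to come from ~/.my.cnf):</p>

```shell
# backup_db: dump a database with the consistency and import-speed
# options discussed above, compressing the result with gzip
backup_db() {
  local db="$1" outfile="$2"
  mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit \
            "$db" | gzip -c > "$outfile"
}
```

<p>Usage: <code>backup_db mydatabase mydump.sql.gz</code></p>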
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>Say you have a string, for example:</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified tracked files in the current path; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that has only the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If, like me, you want git or svn SCM access, you&#39;ll want to install those plugins as well, as they are not included in the list on the jenkins-php instructions page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. If you omit these arguments, ppw selects some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means 'bath' in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will only sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash, the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed to the program.</p>&#13;
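<p>Ruby does still expose the program name, just not via ARGV. Here is a minimal sketch (the filename argv_demo.rb is made up for illustration):</p>&#13;

```ruby
#!/usr/bin/env ruby
# argv_demo.rb (hypothetical filename)
# ARGV holds only the arguments; the script's own name lives in the
# global $0 (also available under the friendlier alias $PROGRAM_NAME).
puts "program: #{$0}"
puts "first argument: #{ARGV[0].inspect}"
```

<p>Running <code>ruby argv_demo.rb helloworld</code> prints the script name on the first line and "helloworld" as the first argument.</p>&#13;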
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software, restart Eclipse, and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and MacPorts MySQL, you might run into some drama trying to get Ruby and MySQL playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL RubyGem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document with an ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Seach_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer and, I think, largely unheralded features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter = null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily, in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table format, suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book or, in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code> <code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool, it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarity, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared with a capitalised name. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
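<p>The behaviour is easy to demonstrate. A minimal sketch (the constant ANSWER is made up for illustration):</p>&#13;

```ruby
# Reassigning a Ruby "constant" is legal; you only get a warning on stderr.
ANSWER = 42
ANSWER = 24            # warning: already initialized constant ANSWER
puts ANSWER            # => 24

# You can even do it with no warning at all via remove_const + const_set.
Object.send(:remove_const, :ANSWER)
Object.const_set(:ANSWER, 0)
puts ANSWER            # => 0
```

<p>The warning is all the protection you get; the value changes regardless.</p>&#13;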
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it as a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely check out the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the object's current SQL state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design-challenged developer like myself, http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sysadmin monkeys that haven’t been seduced by Python, and a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your use of their PHP counterparts.</p>&#13;
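<p>To make that 1:1 mapping concrete, here is a small sketch (the temp-file path and contents are just for illustration): the PHP below keeps the same fopen/fgets/fclose names and call shapes you would use with C's stdio.</p>

```php
<?php
// PHP's file API mirrors C's stdio almost name-for-name:
//   C:   FILE *fh = fopen(path, "w");  fgets(buf, n, fh);  fclose(fh);
//   PHP: $fh = fopen($path, 'w');      fgets($fh, $n);     fclose($fh);
$path = tempnam(sys_get_temp_dir(), 'demo');

$fh = fopen($path, 'w');
fwrite($fh, "hello\nworld\n");
fclose($fh);

$fh = fopen($path, 'r');
$first = fgets($fh);   // reads up to and including the first newline
fclose($fh);
unlink($path);

echo $first; // hello
```

<p>Swap the $ sigils for C declarations and the structure of the program is unchanged.</p>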
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable sigils, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an “appreciation” of memory management. C is unusual among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: char str[20];</p>&#13;
<p>This declares a string of 20 characters, and str itself refers to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone outside the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
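<p>As an illustrative sketch (not from the original post), the two styles below produce the same string, but the concatenating loop may reallocate and copy the growing buffer on each pass, while implode() joins the pieces in one final operation.</p>

```php
<?php
// Growing a string with '.' can force a reallocate-and-copy on each
// iteration, much like repeated realloc()+memcpy() in C.
$slow = '';
for ($i = 0; $i < 5; $i++) {
    $slow .= "chunk$i,";
}

// Collecting pieces first and joining once defers the cost to a single
// final allocation.
$parts = [];
for ($i = 0; $i < 5; $i++) {
    $parts[] = "chunk$i";
}
$fast = implode(',', $parts) . ',';

var_dump($slow === $fast); // bool(true)
```

<p>For a handful of pieces the difference is negligible; over a large loop the single join is the kinder approach to the allocator.</p>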
<p>As mentioned above, PHP was originally intended to be a pure templating language for C web applications, which is where PHP modules / extensions come in: these were originally where your business logic was supposed to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call $fh = fopen('php://memory', 'wb+'); and you can then use the usual file functions you would typically associate with an on-disk file.</p>&#13;
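<p>For instance (a sketch with made-up rows), fputcsv() demands a stream handle, and php://memory supplies one without touching the disk:</p>

```php
<?php
// Write CSV rows into memory, rewind, and read the result back --
// no temp file needed.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, ['id', 'name']);
fputcsv($fh, [1, 'widget']);
rewind($fh);                      // seek back to the start
$csv = stream_get_contents($fh);
fclose($fh);

echo $csv;
// id,name
// 1,widget
```

<p>The same handle works with fread(), fwrite(), fgetcsv() and friends, exactly as a real file handle would.</p>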
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
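<p>A hypothetical sketch (not Magento's actual autoloader code, which lives in Mage/Varien) of the name-to-path mapping at work: each underscore-separated segment of the model part of the alias is capitalised, then underscores become directory separators.</p>

```php
<?php
// Illustrative only: mimics how a Magento-style class loader derives a
// file path from the model part of an alias.
function aliasToPath(string $model): string
{
    // 'a_long_name_for_a_model' -> 'A_Long_Name_For_A_Model'
    $class = implode('_', array_map('ucfirst', explode('_', $model)));
    // 'A_Long_Name_For_A_Model' -> 'A/Long/Name/For/A/Model.php'
    return str_replace('_', '/', $class) . '.php';
}

echo aliasToPath('a_long_name_for_a_model'), "\n"; // A/Long/Name/For/A/Model.php
echo aliasToPath('alongnameforamodel'), "\n";      // Alongnameforamodel.php
```

<p>Camelcased segments never round-trip through this mapping, which is why underscore-separated lowercase names are the safe convention.</p>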
<p>On Windows this is fine; on a case-sensitive file system (e.g. on a Mac or Unix), this will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find the attribute code, look it up either in the database (the eav_attribute table) or in the admin backend under Catalog &gt; Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the row is more than 30 days old and the WHERE clause matches it.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
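To confirm which frontName is currently active, you can pull it straight out of the file from the shell. This is just a quick sketch, assuming the standard app/etc/local.xml layout shown above:

```shell
# Print the current admin frontName from app/etc/local.xml
# (run from the Magento root; path is the standard install location)
sed -n 's/.*<frontName><!\[CDATA\[\(.*\)\]\]><\/frontName>.*/\1/p' app/etc/local.xml
```

On a stock install this prints admin; after your edit it should print the new front name.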
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first, though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS+ filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However, by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
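For instance, the line I use in ~/.profile looks like this (the locale name is mine; substitute any UTF-8 locale that locale -a lists on your machine):

```shell
# ~/.profile: force a UTF-8 capable locale for every shell session
export LC_ALL='de_DE.UTF-8'   # or en_GB.UTF-8, en_US.UTF-8, ...
```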
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different byte sequence in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as those bytes do not form a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to pass a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, null, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
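You can sanity-check the same conversion from the shell, too, using the iconv(1) command line tool (the byte values below are standard; od's exact spacing may vary by platform):

```shell
# 0xFC is "ü" in iso-8859-1; in utf-8 the same character is the two bytes c3 bc
printf '\374' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1
```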
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll get either a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to use it like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code></p>&#13;
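If the same one-liner also has to run under GNU sed (Linux), where -i '' is itself an error, a portable middle ground is to always supply a backup suffix with no space after -i; both implementations accept that form:

```shell
# -i.bak (no space) works with both GNU and BSD sed:
# it edits in place and keeps the original as helloworld.txt.bak
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt        # goodbye world
rm helloworld.txt helloworld.txt.bak
```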
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page, you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expands to the list of array indices: ${#FILES[@]} gives the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3 and 4, one per line.</p>&#13;
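As an aside, when you only need the values and not the indices, bash can iterate the array directly with no seq at all (same FILES array as above):

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )

# "${FILES[@]}" expands to one word per element, even if a path contains spaces
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```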
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment or shipping account credentials, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento (run from the store root) so core models are available&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
Mage::app('default');&#13;
// First CLI argument is the plaintext value to encrypt&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$obj-&gt;setHelper(Mage::helper('core'));&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to get it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase                                                          [48/53]There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
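If you want to see the expanded command end to end, here is a disposable sketch using a temporary bare remote; the paths and the zendesk branch name are made up for the demo:

```shell
# Throwaway demo: push a branch without -u, then set its upstream
# using the same command the 'sup' alias runs.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email demo@example.com
git config user.name demo
git remote add origin "$tmp/remote.git"
git commit -q --allow-empty -m 'initial commit'
git checkout -q -b zendesk
git push -q origin zendesk                 # forgot -u: no tracking info yet
git branch --set-upstream-to=origin/"$(git symbolic-ref --short HEAD)"
git rev-parse --abbrev-ref '@{u}'          # prints origin/zendesk
```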
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
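Here is a minimal repro of the problem, plus one common workaround; the process-substitution form is bash-specific:

```shell
# Each stage of a pipeline runs in its own subshell, so the
# increment inside the while loop is lost once the pipeline ends.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count + 1)); done
echo "$count"    # prints 0: the loop incremented a copy in a subshell

# Workaround: feed the loop via process substitution instead of a pipe,
# so the while loop runs in the current shell (bash-only syntax).
count=0
while read -r line; do count=$((count + 1)); done < <(printf 'a\nb\nc\n')
echo "$count"    # prints 3
```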
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward; this is called a fast-forward.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
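As a quick illustration of what merge.ff only buys you, here is a disposable repo where two branches have diverged; the branch names are made up for the demo:

```shell
# With merge.ff set to 'only', merging a diverged branch is refused
# instead of silently creating a merge commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git config merge.ff only
git commit -q --allow-empty -m base
git checkout -q -b feature
git commit -q --allow-empty -m 'feature work'
git checkout -q -
git commit -q --allow-empty -m 'mainline work'   # the branches have now diverged
if git merge feature >/dev/null 2>&1; then merged=yes; else merged=no; fi
echo "$merged"    # prints no: a gentle reminder to rebase first
```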
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit are subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is always providing a description message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel there is scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: the statist style and the mockist (London school) style. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book; it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature testing tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
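For a quick end-to-end check of the trick, here is a sketch run against the sqlite3 CLI (assuming it is installed); its NULLIF and MIN behave the same way as MySQL's for this query:

```shell
# Group 1 has a 0.00 price that should be skipped; group 2's lowest
# non-zero price is 4.0. MIN ignores the NULLs produced by NULLIF.
out=$(sqlite3 :memory: <<'SQL'
CREATE TABLE products (group_id INTEGER, price REAL);
INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 4.0), (2, 0.0);
SELECT group_id, MIN(NULLIF(price, 0)) FROM products GROUP BY group_id;
SQL
)
echo "$out"    # group 1 -> 4.5 (the 0.0 row is ignored), group 2 -> 4.0
```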
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
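For what it&#39;s worth, favouring /opt/local/bin is just a PATH ordering tweak. A minimal sketch of the lines I mean for a ~/.bashrc or ~/.zshrc (the sbin entry is optional):

```shell
# Put Macports' directories first so its GNU tools shadow
# the BSD equivalents in /usr/bin.
export PATH=/opt/local/bin:/opt/local/sbin:$PATH
echo "${PATH%%:*}"    # prints /opt/local/bin
```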
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem: to HFS, PHP and php appear as the same thing.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying, so we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…).</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache or Nginx? It doesn&#39;t really matter, both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
127.0.0.1 localhost magento.dev
</code></pre>
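<p>If you script your machine setup, the edit is easy to make idempotent so repeated runs don&#39;t stack up duplicate entries. A sketch against a temp file standing in for /etc/hosts (add_host is a made-up helper name):</p>

```shell
# Idempotent hosts edit, demonstrated on a temp file so it is safe to run.
hosts=$(mktemp)   # stand-in for /etc/hosts
echo '127.0.0.1 localhost' > "$hosts"

add_host() {      # made-up helper, for illustration
    grep -q "$1" "$hosts" || echo "127.0.0.1 $1" >> "$hosts"
}
add_host magento.dev
add_host magento.dev   # second run is a no-op
cat "$hosts"
```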
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing it under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
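<p>Before pointing nginx at the pair, it can be worth checking that the certificate and key actually belong together; comparing their public-key modulus digests is the usual trick. (The -subj value below just fills in the certificate prompts non-interactively; substitute your own details.)</p>

```shell
cd "$(mktemp -d)"   # scratch directory so nothing real is overwritten

# Generate a throwaway pair non-interactively.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=magento.dev" \
    -keyout magento.dev.key -out magento.dev.crt 2>/dev/null

# A matching pair reports the same RSA modulus.
crt_mod=$(openssl x509 -noout -modulus -in magento.dev.crt)
key_mod=$(openssl rsa  -noout -modulus -in magento.dev.key)
[ "$crt_mod" = "$key_mod" ] && echo "certificate and key match"
```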
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
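<p>For a docroot nested more deeply, the same fix generalises to walking up the tree. A sketch that only echoes the commands it would run, so it is safe to try anywhere:</p>

```shell
# Walk up from the docroot, printing the chmod for each level.
# echo is deliberate: drop it once you are happy with the output.
d=/Users/aaron/Sites
while [ "$d" != "/" ]; do
    echo chmod a+x "$d"
    d=$(dirname "$d")
done
# prints three lines, ending with: chmod a+x /Users
```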
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go:</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by pointing PHP at the sock file for the MySQL version you&#39;re using. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
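<p>Note that the classic mysql and mysqli extensions read their own socket settings, so if anything else on the box uses those, it may be worth writing all three directives at once (pipe the output through sudo tee as above; adjust the path for your MySQL port version):</p>

```shell
# Emit the socket directive for all three MySQL-flavoured extensions.
sock=/opt/local/var/run/mysql55/mysqld.sock
printf '%s.default_socket=%s\n' \
    pdo_mysql "$sock" \
    mysql     "$sock" \
    mysqli    "$sock"
```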
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
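<p>As an aside, shell parameter expansion offers the same first-versus-last split, with spellings that are at least consistent with each other:</p>

```shell
# ${var#pattern} strips the shortest matching prefix (up to the first '/'),
# ${var##pattern} the longest (up to the last '/'); the delimiter goes too.
url='http://www.google.com/a/b/c/d.img'
echo "${url##*/}"   # like strrchr: prints d.img
echo "${url#*/}"    # like strstr:  prints /www.google.com/a/b/c/d.img
```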
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array and Float. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
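<p>Under the hood, -s quit simply delivers SIGQUIT to the master process, which finishes its outstanding work before exiting. A toy stand-in process shows the same trap-and-drain mechanism (SIGTERM is used below only because a plain non-interactive shell starts background jobs with SIGQUIT ignored):</p>

```shell
# A toy "master": on the shutdown signal it announces the drain,
# tidies up its in-flight work (a background sleep) and exits cleanly.
( trap 'echo "draining connections"; kill "$work" 2>/dev/null; exit 0' TERM
  sleep 30 & work=$!
  wait "$work" ) &
master=$!

sleep 1                # give the stand-in a moment to install its trap
kill -TERM "$master"   # nginx -s quit does the same with SIGQUIT
wait "$master"         # returns once the stand-in has drained
status=$?
```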
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did in November though, resolve to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Although most of the vocabulary of OO software development was formalised during Smalltalk&#39;s development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>Since making a determined effort to read more, I&#39;ve read four books cover to cover and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee or lunch, waiting at the supermarket, or just before bed. By reading a little every day, I started to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I am slightly obsessive about keeping the unread count at zero, so at the end of a thirty minute work sprint I would take five minutes to quickly flick through the list. Most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself; blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013; we (still) don&#39;t have flying cars or hoverboards, and as developers we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits that improve my command line efficiency.</p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command with sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open $EDITOR to compose a long command; in my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
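For example, feeding a couple of inline rows through column (the data here is made up, standing in for /etc/passwd):

```shell
# -s':' splits each line on colons, -t pads the fields into aligned columns.
out=$(printf 'root:x:0\ndaemon:x:1\n' | column -s':' -t)
printf '%s\n' "$out"
```

Each field ends up padded to the width of its column, which makes eyeballing colon-delimited files far easier.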
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story, very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, if we want to debug during a phpunit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it does for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward all connections made to port 9000 on its localhost back to port 9000 on the machine you connected from. When xdebug goes to connect to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The alternative is Vim and its xdebug plugin, which isn&#39;t bad, but once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full-stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="https://github.com">Github</a> is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available, and in the PHP camp <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires stable package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) and dubious (at worst) quality, and a community lacking any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
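In practice that freedom boils down to a composer.json per project: each project declares its own constraints and resolves them into its own vendor directory, so the stable-versus-beta clash described above simply doesn&#39;t arise (the package name below is invented for illustration):

```json
{
    "require": {
        "acme/package-y": "1.0.*"
    }
}
```

A second project is free to require "acme/package-y": "2.0.*@beta", and the two installs never interfere with one another.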
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so we use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
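The whole fetch-then-inspect workflow can be sketched end to end with a pair of throwaway repositories (the paths, identities and commit messages below are invented for the demo):

```shell
set -e
tmp=$(mktemp -d)
# Create an upstream repo with a single commit.
git -c init.defaultBranch=master init -q "$tmp/upstream"
(cd "$tmp/upstream" \
  && git config user.email demo@example.com && git config user.name demo \
  && echo one > file.txt && git add file.txt && git commit -qm 'initial')
# Clone it, then advance upstream by another commit.
git clone -q "$tmp/upstream" "$tmp/local"
(cd "$tmp/upstream" && echo two >> file.txt && git commit -qam 'update')
# Fetch, then ask which files differ before merging or rebasing.
cd "$tmp/local"
git fetch -q origin
changed=$(git diff master..origin/master --name-only)
printf '%s\n' "$changed"
```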
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First, shut down the running instance and then restart it directly:</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>Admittedly this isn&#39;t great documentation, but basically it means: if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong> and Rails searches the handlers from the bottom up, nothing else will get a look in. The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you can do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branch list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
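If you do go the crontab route, an entry like the following is all it takes; note this is a sketch only, with an invented schedule, invented credentials, and table names from a stock Magento 1.x schema with no prefix:

```cron
# Nightly at 03:00: empty the busiest visitor/url log tables directly.
0 3 * * * mysql -ustore_user -pSECRET storedb -e 'TRUNCATE log_visitor; TRUNCATE log_visitor_info; TRUNCATE log_url; TRUNCATE log_url_info;'
```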
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS safe: they fixed one occurrence but (programmers being human) missed the other, identical, line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, or more specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported with Chef Solo by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item, because it no longer meets the defined dependency requirements.</p>
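<p>As a sketch of the approach (the menu path and module name below are examples only; substitute the path of the entry you actually want to hide):</p>

```xml
<config>
    <menu>
        <!-- Example: hide Catalog > URL Rewrite Management -->
        <catalog>
            <children>
                <urlrewrite>
                    <!-- Depend on a module that does not exist, so the
                         item never satisfies its requirements -->
                    <depends><module>Mage_Nonexistent</module></depends>
                </urlrewrite>
            </children>
        </catalog>
    </menu>
</config>
```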
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much as ; does in regular bash; the backslash stops the shell from consuming it before find sees it).</p>
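<p>Putting the touch and find steps together, here&#39;s a self-contained dry run in a scratch directory (the dates and file names are invented for the example):</p>

```shell
# Work in a throwaway directory
d=$(mktemp -d)
mkdir "$d/files"

# Boundary files marking the start and end of the date range
touch -t 201208010000 "$d/start"   # 2012-08-01 00:00
touch -t 201208310000 "$d/end"     # 2012-08-31 00:00

# One file inside the range, one before it
touch -t 201208150000 "$d/files/report_in_range"
touch -t 201207010000 "$d/files/report_too_old"

# Lists only report_in_range; swap in -exec rm -f {} \; to delete instead
find "$d/files" -type f -newer "$d/start" ! -newer "$d/end"
```

<p>Newer versions of GNU find also accept -newermt with a plain date string (e.g. -newermt &#39;2012-08-01&#39;), which lets you skip creating the boundary files entirely.</p>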
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because the awk program itself is single-quoted, you pass the quote character in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
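<p>To try the formatting stages without a database, you can stand in for mysql&#39;s output with printf (the column values here are made up):</p>

```shell
# Simulated `mysql --silent` output: one value per line
printf 'alpha\nbeta\ngamma\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' \
  | sed 's/\(.*\)/[\1];/'
# ['alpha','beta','gamma'];
```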
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are either NULL or lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not inside the date range and is not NULL. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
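<p>Applied to a concrete range and a hypothetical field name (expiry_date here is just an example), the filter would look like:</p>

```text
-(-expiry_date:[2012-01-01T00:00:00Z TO 2012-12-31T23:59:59Z] AND expiry_date:[* TO *])
```

<p>This matches documents whose expiry_date falls in 2012 or is missing entirely.</p>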
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products, and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and let us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. It means the validator also has to be constructed each time around, but that is what allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot, and by break, in the absolute best case, I mean merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, with which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf over to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds, and will assign a status id to them, this is typically used on grouped products when determining if all their children stock items are out of stock. </p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key, and column b is the value. Because the SQL was returning the status column first (i.e. the key value) the result set consisted of just 2 rows (for each unique status code). In order for this code to work as you would expect the entity id (product id) needs to be first in the result set, then it gets used as a key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key by fetchPairs() and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is an unusual perspective, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into the common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough-and-ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting is clear and easy on the eye. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit your .vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile is sourced only on login. Specifically, this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell in other ways, such as via the su - command or an explicit login shell sometimes provided by a desktop environment. In these cases the rule still applies: a login shell means .bash_profile is sourced, and .bashrc is only read if .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
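<p>For example, a minimal .bash_profile along those lines (a sketch of the common stock arrangement, not a prescription):</p>

```shell
# ~/.bash_profile - sourced by login shells only.
# One-time environment setup lives here.
export PATH="$HOME/bin:$PATH"

# Hand everything else off to .bashrc so interactive
# login and non-login shells behave the same.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```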
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
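<p>The idea is easy to see outside PHPUnit too. Here is a hand-rolled sketch (the FakeMailer class and its method names are hypothetical, not a real API): when every stubbed method returns the object itself, a fluent chain flows through without fatalling on a null return:</p>

```php
// Stand-in for a fluent class like Zend_Mail: every method call
// returns $this, mimicking what will($this->returnSelf()) configures.
class FakeMailer
{
    public function __call($method, array $args)
    {
        return $this;
    }
}

$mock = new FakeMailer();

// The whole chain works because each call hands back the mock.
$result = $mock->setSubject('hi')
               ->addTo('someone@example.com')
               ->setBodyText('hello');

var_dump($result === $mock); // bool(true)
```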
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>, presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on a would-be technologist.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
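<p>To see what each stage contributes, you can run the middle of the pipeline against some faked selection data (a sketch; the real command reads from <em>dpkg --get-selections</em>):</p>

```shell
# Two fake selection lines: one installed package, one removed but
# not purged. Only the deinstall line survives the grep, and sed
# flips its selection state to purge.
printf 'vim\tinstall\nold-pkg\tdeinstall\n' \
  | grep deinstall \
  | sed 's/deinstall/purge/'
```

<p>Feeding that output back through <em>dpkg --set-selections</em> marks the package for purging, which the final <em>dpkg -Pa</em> then carries out.</p>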
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Note that modifyvm can only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libXss and the various Qt libraries.</p>
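<p>As an aside, on a long dependency list you can cut straight to the gaps by piping ldd through grep. With the binary installed you would run <em>ldd /usr/bin/skype | grep &#39;not found&#39;</em>; the same filter is shown here against a captured sample of the output so the sketch stands alone:</p>

```shell
# Filter a saved sample of ldd output down to the unresolved entries.
printf '%s\n' \
  'libXv.so.1 => /usr/lib32/libXv.so.1 (0xf75d9000)' \
  'libXss.so.1 => not found' \
  'libQtGui.so.4 => not found' \
  | grep 'not found'
```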
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into, and then ripped out of, the core quite frequently in the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6-compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entity ID, and then returns whichever value matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without rolling the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
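<p>Since varnishd exits non-zero when compilation fails (note the &#39;exit 1&#39; above), the check drops neatly into a deploy script or pre-commit hook. A minimal sketch — the function name and example path are mine, and it assumes varnishd is on your PATH:</p>

```shell
# Succeeds silently if the VCL compiles, fails otherwise;
# varnishd's (verbose) compiled output is discarded
vcl_check() {
    varnishd -C -f "$1" > /dev/null 2>&1
}
# usage: vcl_check /etc/varnish/mysetup.vcl || echo "VCL failed to compile"
```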
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible, mind, but a lot of the rough edges (multiple/external monitor support, for example) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>, where you can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (see <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (see <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
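<p>As an aside, the renaming done by sed can also be handled with bash parameter expansion, which avoids spawning a subshell per file. A minimal sketch of just the rename logic — the filename is illustrative:</p>

```shell
IMAGE="photo.jpg"
# ${IMAGE%.jpg} strips the trailing .jpg, so we can rebuild the new name
RESIZED="${IMAGE%.jpg}-resized.jpg"
echo "$RESIZED"   # -> photo-resized.jpg
```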
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action:</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch named adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
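<p>The whole cycle can be exercised end to end against a throwaway local remote. Everything below lives in a temporary directory, so it never touches a real server; the branch names echo the examples above:</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"       # stand-in for a real remote
git init -q "$tmp/work"
cd "$tmp/work"
git checkout -q -b main
git -c user.email=ci@local -c user.name=ci commit -q --allow-empty -m init
git remote add origin "$tmp/origin.git"
git checkout -q -b someremotebranch
git push -q origin main someremotebranch   # both branches now on the remote
git push -q origin :someremotebranch       # push 'nothing' -> branch deleted
git ls-remote --heads origin               # only refs/heads/main remains
```

Newer versions of git also accept the more explicit `git push origin --delete someremotebranch`, which does exactly the same thing.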
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is that sponge waits for the end-of-file character (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
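<p>If sponge isn&#39;t available, the classic workaround is a temporary file; the commented-out line below shows the trap sponge exists to avoid. A small self-contained sketch (the filename and sample text are mine):</p>

```shell
tmp=$(mktemp -d)
printf 'caf\xe9\n' > "$tmp/a.txt"   # 'café', with é as the single cp1252 byte 0xE9
# WRONG: '>' would truncate a.txt to zero bytes before iconv ever reads it
# iconv -f cp1252 -t utf-8 "$tmp/a.txt" > "$tmp/a.txt"
# Safe without sponge: write to a temp file, then move it into place
iconv -f cp1252 -t utf-8 "$tmp/a.txt" > "$tmp/a.txt.new"
mv "$tmp/a.txt.new" "$tmp/a.txt"
wc -c < "$tmp/a.txt"                # 6 bytes: é is now the two-byte UTF-8 sequence
```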
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
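<p>For the curious, the mechanics can be verified locally without touching the network. In this sketch cat stands in for wget, and the tarball is built on the spot; run it under bash, since &#39;&lt;(&#39; is a bashism:</p>

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir atarfile && echo hello > atarfile/file.txt
tar czf atarfile.tar.gz atarfile && rm -r atarfile
# cat stands in for 'wget -q -O -'; <( ) makes the named pipe,
# and the outer < redirects its contents into tar's stdin
tar zx < <(cat atarfile.tar.gz)
cat atarfile/file.txt               # -> hello
```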
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can take the module-list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
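<p>The substitution pattern itself is plain shell and easy to verify without a Drupal install. In this sketch printf stands in for the pm-list call, and the module names are purely illustrative:</p>

```shell
# The inner command's stdout becomes the outer command's argument list;
# leaving $modules unquoted lets word-splitting turn lines into arguments
modules=$(printf 'ad\nad_channel\nclick_filter\n')
echo disable $modules               # -> disable ad ad_channel click_filter
```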
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
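<p>On git 1.7.0 and later, the last two steps collapse into one with the -u (--set-upstream) flag to push. A throwaway-repo sketch, using a bare local repository as the stand-in remote:</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"   # stand-in remote
git init -q "$tmp/work"
cd "$tmp/work"
git checkout -q -b my-new-feature
git -c user.email=ci@local -c user.name=ci commit -q --allow-empty -m 'Initial feature commit'
git remote add origin "$tmp/origin.git"
git push -q -u origin my-new-feature   # push and set upstream in one go
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'   # -> origin/my-new-feature
```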
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
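<p>To confirm the change took (or to check what would be clobbered without -a), the id command lists a user&#39;s groups and needs no root at all; with no argument it reports on the current user:</p>

```shell
# -n prints names rather than numeric ids, -G lists all groups
# (primary plus supplementary) for the current user
id -nG
```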
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
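<p>The same sequence can be exercised in a scratch repository. Note that plain &#39;git stash&#39; is shorthand for &#39;git stash save&#39;, and &#39;git stash pop&#39; applies and drops the stash in one step:</p>

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp" && git init -q
git -c user.email=ci@local -c user.name=ci commit -q --allow-empty -m init
git branch develop
echo change > file.txt && git add file.txt   # oops: changes on the wrong branch
git stash -q                                 # park the work in progress
git checkout -q develop
git stash pop -q                             # replay it on develop, drop the stash
git status --short                           # file.txt has followed us to develop
```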
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
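<p>Equivalently, you can set the identity with git config --global (run as the jenkins user). The sketch below deliberately overrides HOME to a scratch directory so it writes a throwaway .gitconfig rather than touching your real one:</p>

```shell
export HOME="$(mktemp -d)"   # scratch HOME for the demo; omit this on a real box
git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"
git config --global user.name    # -> Jenkins
```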
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment setup correctly, please checkout my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: <code>git@github.com:ajbonner/Bookings.git</code>.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39; - naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>Jenkins needs to run a first build before the project workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><code>reload</code> - Reload server configuration</li>
<li><code>restart</code> - Restart the server</li>
<li><code>exit</code> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
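<p>These endpoints all follow the same URL shape, so a small helper keeps scripts consistent. This is a sketch only: the <code>jenkins_admin</code> function name is mine, and it simply prints the URL you would hand to curl.</p>

```shell
#!/bin/sh
# Compose the URL for one of Jenkins' HTTP admin commands.
# (Illustrative helper; pipe the result to curl in real use.)
jenkins_admin() {
    server="$1"    # host:port, e.g. localhost:8080
    command="$2"   # reload | restart | exit
    echo "http://${server}/${command}"
}

jenkins_admin localhost:8080 reload    # prints http://localhost:8080/reload
```

<p>From there, <code>curl "$(jenkins_admin localhost:8080 reload)"</code> issues the actual request.</p>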
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepared package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
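<p>If you provision machines with scripts, the port change can be automated with sed. The sketch below works on a local stand-in copy of the file so it&#39;s safe to run anywhere; on a real box you would point it (with sudo) at /etc/default/jenkins, and 9090 is just an example port.</p>

```shell
#!/bin/sh
# Make a stand-in copy of /etc/default/jenkins to edit safely.
cfg="./jenkins.default"
printf '# port for HTTP connector (default 8080; disable with -1)\nHTTP_PORT=8080\n' > "$cfg"

# Rewrite the HTTP_PORT line in place (GNU sed syntax).
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=9090/' "$cfg"

grep '^HTTP_PORT=' "$cfg"    # prints HTTP_PORT=9090
```

<p>On the real file this becomes <code>sudo sed -i &#39;s/^HTTP_PORT=.*/HTTP_PORT=9090/&#39; /etc/default/jenkins</code>, followed by the service restart above.</p>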
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one that has burned the most of my time.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it was enough for Google to lead me to a solution: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be done with your package manager of choice, e.g. aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, E/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into GitHub (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent <em>git pull</em> invocations will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig</p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote</p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>You can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>With this set, git will configure branch tracking automatically, so you won&#39;t need to set it up by hand.</p>
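<p>The whole remote-add / push / set-upstream flow can be rehearsed against a local bare repository, which is a handy way to experiment without touching GitHub. A sketch with throwaway paths and a throwaway identity; note that modern git spells the upstream step <code>--set-upstream-to</code>.</p>

```shell
#!/bin/sh
set -e
# Throwaway "remote": a bare repository on the local filesystem.
tmp="$(mktemp -d)"
git init --bare --quiet "$tmp/remote.git"

# A working repository with one commit on master.
git init --quiet "$tmp/work"
cd "$tmp/work"
git symbolic-ref HEAD refs/heads/master   # make sure the branch is called master
git config user.email dev@example.com     # throwaway identity for the commit
git config user.name "Dev"
echo hello > readme.txt
git add readme.txt
git commit --quiet -m 'Initial commit'

# The steps from the post, pointed at the local bare repo:
git remote add origin "$tmp/remote.git"
git push --quiet origin master
git branch --set-upstream-to=origin/master master   # older git: --set-upstream

git config branch.master.merge    # prints refs/heads/master
```

<p>Once the upstream is set, a bare <code>git pull</code> works without complaint.</p>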
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has in the past been notoriously difficult to apply TDD practices to. Luckily, in the past few months, <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a> testing frameworks have been released, making this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store, as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a LIKE pattern, e.g. something like:</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you want to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
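<p>For feed readers that don&#39;t render the embedded gist, the shape of the approach is roughly the following. This is a sketch: the table lookup is stubbed out with fixed names, where the real function would run something like <code>mysql -N -e &quot;SHOW TABLES LIKE &#39;pattern%&#39;&quot;</code> against the target database, and the user/database names are placeholders.</p>

```shell
#!/bin/sh
# Sketch: dump only the tables whose names match a pattern.

# Stubbed lookup; the real version queries the server for matching tables.
matching_tables() {
    printf 'mytables_a\nmytables_b\n'
}

# Collect the matches into one space-separated list for mysqldump.
build_dump_command() {
    tables="$(matching_tables "$1" | tr '\n' ' ')"
    echo "mysqldump -uuser -p mydb ${tables% }"
}

build_dump_command 'mytables_'
```

<p>Swapping the stub for a real mysql query (and executing the composed command instead of echoing it) gives the behaviour the original question asked for.</p>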
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em>Edited 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until those updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using the Strings from the keys array instead, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select PDT Development Tools All in One SDK (leave the others unselected) and click Next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining variables in Ruby, this looks like yet another variable-like construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails, 4th Ed., puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
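<p>To make the &#39;same object&#39; point concrete, here is a small sketch comparing object identity:</p>

```ruby
# Two String literals with identical contents are distinct objects,
# but two Symbols with the same name are the very same object.
a = "name"
b = "name"
puts a == b               # true  - equal contents
puts a.equal?(b)          # false - two different String objects
puts :name.equal?(:name)  # true  - one shared Symbol object
```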
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (capitalising a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime: you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
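<p>One caveat worth noting: bash aliases don&#39;t accept positional parameters, so the <code>&quot;$@&quot;</code> in the alias above isn&#39;t doing what it appears to; arguments typed after an alias are simply appended to the expanded text. A shell function makes the intent explicit (a sketch; qlmanage only exists on OSX):</p>

```shell
# A function behaves predictably with multiple file arguments,
# unlike an alias, which cannot reference its own arguments.
ql() {
    qlmanage -p "$@" >/dev/null 2>&1
}
```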
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character for ASCII output or up to four bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: the slower the dump runs, the longer you hold locks, and (by default with MyISAM) you’re locking whole tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but lengthy dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the integrity of a MyISAM table can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
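<p>Putting the options above together, a full backup command might look something like this (user and database names are placeholders):</p>

```shell
# Sketch: a consistent, non-blocking dump combining the options
# discussed above, compressed with gzip for faster restores.
mysqldump --single-transaction --skip-lock-tables \
          --disable-keys --no-autocommit \
          -uuser -p mydatabase | gzip > mydump.sql.gz
```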
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to it to be picked up.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the cache directory PEAR expects:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small: some basic webpage views, maybe a few forms and, likely, some sort of search functionality. At this stage, if things need to change, you can normally change them in place, directly on the web-server, without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkin&#39;s site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins&#39;s install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to set up your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name; in Ruby, as in Perl, it is the first argument passed to the program.</p>&#13;
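<p>A small sketch shows both values side by side (the filename here is hypothetical):</p>

```ruby
#!/usr/bin/env ruby
# show_args.rb -- hypothetical filename
# In Ruby the program's name lives in $0 (alias $PROGRAM_NAME),
# while ARGV holds only the arguments.
puts $0       # the script's own name
puts ARGV[0]  # the first argument, or a blank line if none was given
```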
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL RubyGem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often in the course of the index's lifecycle want to update documents.  This can prove tricky with the current implementation as there is no insitu update feature, you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as following:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This comes down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
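<p>Putting the two together, an "update" becomes a delete followed by an add, as described above. A rough sketch (the 'uri' and 'title' field names are just illustrative) using termDocs() to locate and remove the stale document before re-indexing it:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('http://a.com/uri', 'uri');&#13;
foreach ($index-&gt;termDocs($term) as $id) {&#13;
    $index-&gt;delete($id);&#13;
}&#13;
&#13;
$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));&#13;
$index-&gt;addDocument($doc);&#13;
</pre>&#13;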
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
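<p>As far as I can tell these flags can also be OR'd together, in which case the result is the union of methods matching any of the given flags. A quick sketch (against a hypothetical class):</p>&#13;
<pre>$r = new ReflectionClass("MyClass");&#13;
// Print methods that are public OR protected&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_PROTECTED) as $m) {&#13;
    echo $m-&gt;name, "\n";&#13;
}&#13;
</pre>&#13;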
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internalisation support is pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retro fit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>The same conversion applies to any language pack you want to use; run it through iconv before uploading:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarity, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
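<p>To be fair, Ruby does at least grumble. Reassigning a constant succeeds, but emits an "already initialized constant" warning. A quick sketch (my own, with a made-up constant name):</p>&#13;
<pre># file const_test.rb&#13;
TIMEOUT = 30&#13;
TIMEOUT = 60   # warning: already initialized constant TIMEOUT&#13;
puts TIMEOUT   # =&gt; 60&#13;
</pre>&#13;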
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple TCP client in PHP, remove all the $ variable sigils and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit though of learning a bit of C or C++ is to get an “appreciation” of memory management. C is unusual among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters; str points to an address in memory where 20 bytes have been reserved for it. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
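<p>A minimal sketch of the trap, and the defensive idiom, looks like this (note C happily compiles the unsafe version):</p>&#13;
<pre>#include &lt;stdio.h&gt;&#13;
#include &lt;string.h&gt;&#13;
&#13;
int main(void)&#13;
{&#13;
    char str[20];&#13;
&#13;
    /* strcpy(str, "this is far longer than twenty chars"); -- overflows str */&#13;
&#13;
    /* snprintf never writes past sizeof(str): 19 chars + terminating NUL */&#13;
    snprintf(str, sizeof(str), "%s", "this is far longer than twenty chars");&#13;
    printf("%s\n", str);&#13;
    return 0;&#13;
}&#13;
</pre>&#13;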
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20-char string defined but want to store a 25-char string, you need to reallocate memory. You could declare a new, larger array, or do a concatenation operation into a bigger buffer. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be a pure templating language for C web applications, which is where PHP modules and extensions come in: originally, that is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then call these functions from PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. A way to avoid physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded POST data), and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel.  This will not work as you might expect on different (namely case sensitive) environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive file system, e.g. case-sensitive HFS+ (Mac) or a typical Unix file system, this will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM my_table WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So any row whose date_column falls before this value is more than 30 days old, which is exactly what the WHERE clause above matches.</p>&#13;
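<p>If GNU coreutils are available, the same interval arithmetic can be sanity-checked from the shell. This is only a sketch assuming GNU date (the -d flag below is not supported by the stock Mac OSX date):</p>

```shell
# 30 days before a fixed date, using GNU date's relative-item syntax
cutoff=$(date -d '2010-05-20 30 days ago' +%F)
echo "$cutoff"    # prints 2010-04-20

# rows whose date_column falls before $cutoff are more than 30 days old,
# mirroring: WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > date_column
```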
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by editing app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer creates this file by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first, though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages under the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages under the url customer/account/edit/). Including your remove declaration in the right element here allows fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, which removed the second parameter passed to the delete command. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully restored some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
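<p>Concretely, the relevant lines in ~/.profile or ~/.bash_profile might look like the following. This is only a sketch; de_DE.UTF-8 is the example locale here, so substitute whatever 'locale -a' actually lists on your machine:</p>

```shell
# ~/.bash_profile -- pair a utf-8 terminal with a utf-8 locale.
# The locale name must appear in the output of `locale -a`.
export LC_ALL='de_DE.UTF-8'
export LANG='de_DE.UTF-8'
```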
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client who was pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1: lower ASCII covers the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. The extended characters, though, like accents, symbols and umlauts, differ. A cp-1252 trademark symbol has a different code in utf-8, so if you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, because that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
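<p>The same conversion can be spot-checked from the command line with the iconv utility, which mirrors the PHP extension (this assumes a system where iconv(1) is installed, as it is with glibc and libiconv):</p>

```shell
# 0xE9 (octal \351) is "é" in iso-8859-1; convert it to utf-8
printf 'caf\351' | iconv -f ISO-8859-1 -t UTF-8
# prints: café
```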
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It isn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt. BSD sed (the version that ships with Mac OSX) requires an explicit, possibly empty, backup suffix after -i, whereas GNU sed does not.</p>&#13;
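<p>If you want one invocation that behaves the same under both BSD and GNU sed, a reasonable compromise is to attach a non-empty backup suffix directly to -i; both implementations accept that form, and you can delete the backup afterwards:</p>

```shell
# portable in-place edit: GNU and BSD sed both accept an attached suffix
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt        # prints: goodbye world
rm helloworld.txt.bak     # drop the backup once you're happy
```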
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>${#FILES[@]} expands to the number of elements in FILES, so $(seq 0 $((${#FILES[@]} - 1))) produces the list of valid array indexes. The seq command prints a sequence of numbers from x to y; call seq 0 4 and you will get 0, 1, 2, 3, 4, each on its own line.</p>&#13;
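<p>For what it's worth, bash can also walk the array directly, which avoids seq entirely and, when quoted, copes with elements containing spaces. A small sketch:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

# "${FILES[@]}" expands to one word per array element
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```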
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 "$file" "resized_$file"; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
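<p>As an aside, this is roughly what such a config file looked like on the master branch at the time. Treat it as a sketch: the fixer names and the src path below are placeholders for whatever your project actually uses, so check the README in your checkout for the exact options.</p>

```php
<?php
// .php_cs -- illustrative config for the (then) master branch of PHP-CS-Fixer.
// The fixer names are examples only; list the ones your project needs.
return Symfony\CS\Config\Config::create()
    ->fixers(array('indentation', 'elseif', 'unused_use'))
    ->finder(
        Symfony\CS\Finder\DefaultFinder::create()
            ->in(__DIR__ . '/src') // only scan your own source directory
    );
```

The tool picks the file up from the directory it is run in, which is exactly the "no command-line soup" convenience the post is after.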
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux; you just run</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
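<p>To see the alias end to end, here is a self-contained sketch using a throwaway local &quot;remote&quot; (all the names here are invented for the demo):</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare remote.git                # stand-in for the real origin
git clone -q remote.git work 2>/dev/null && cd work
g() { git -c user.email=a@example.com -c user.name=demo "$@"; }  # avoid needing global config
g commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD
git checkout -qb zendesk
git push -q origin zendesk                   # pushed without -u, so no upstream is set
# this is exactly what `git sup` expands to:
git branch --set-upstream-to=origin/"$(git symbolic-ref --short HEAD)"
git rev-parse --abbrev-ref '@{upstream}'     # prints origin/zendesk
```

After this, a plain <em>git pull --rebase</em> on the branch works without arguments.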
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell…this means variables set in one stage cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
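<p>A quick way to see this for yourself in bash (the loop body runs in the pipeline&#39;s subshell, so its copy of the variable vanishes when the pipeline ends):</p>

```shell
# Variables set inside a pipeline stage vanish with that stage's subshell:
count=0
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))      # increments a copy inside the subshell
done
echo "count=$count"         # prints count=0

# One workaround: lose the pipe and redirect input instead (here, a
# here-document), so the loop runs in the current shell:
total=0
while read -r line; do
  total=$((total + 1))
done <<EOF
a
b
c
EOF
echo "total=$total"         # prints total=3
```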
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
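<p>Here is a disposable-repo sketch of the setting in action (branch names and commit messages are invented for the demo):</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git checkout -qb trunk
g() { git -c user.email=a@example.com -c user.name=demo "$@"; }  # avoid needing global config
g commit -q --allow-empty -m 'base'
git branch feature                          # feature starts at base
g commit -q --allow-empty -m 'trunk moves ahead'
git checkout -q feature
g commit -q --allow-empty -m 'feature work' # histories have now diverged
git config merge.ff only
result=$(git merge trunk >/dev/null 2>&1 && echo merged || echo refused)
echo "$result"    # refused: rebase first, or override with git merge --no-ff
```

Because feature and trunk have diverged, the merge is not a fast-forward and merge.ff=only makes git refuse it, which is the gentle rebase reminder the post describes.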
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook, I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea of those that attempt it, and do a good job, getting appropriately rewarded for doing so.</p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: the Statist TDD and Mockist/London School TDD styles. The former is a style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
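<p>A tiny worked example (the table and values are invented for illustration):</p>

```sql
-- products: group_id | price
--               1    |  0.00
--               1    |  4.99
--               1    |  9.99
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products
GROUP BY group_id;
-- min_price for group 1 is 4.99: NULLIF turned the 0.00 row into NULL,
-- and aggregate functions like MIN ignore NULLs.
-- Caveat: if every price in a group is 0, min_price comes back NULL.
```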
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select which lets us select a version to activate and give us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does check for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn on the channel auto-discovery option, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        fastcgi_read_timeout 120;
        fastcgi_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed ssl certificate / key pair and storing them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember, you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by pointing PHP at the sock file for the MySQL version you&#39;re using, via the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
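<p>Alternatively, since the option follows the .ssh/config format anyway, you can record the key against the host once and drop the -o flag from the sshfs call entirely. A sketch using the same (example) host and key names as above:</p>
<pre><code># ~/.ssh/config
Host aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
</code></pre>
<p>With that in place, a plain &#39;sshfs aws.instance.com:/var/www/ ~/Sites/awshost&#39; picks the key up automatically, as does regular ssh.</p>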
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);  // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
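<p>To make the idea concrete, here is a minimal sketch of what such a wrapper could look like. The class name and API below are entirely hypothetical, not an existing library:</p>

```php
<?php
// Hypothetical string wrapper: gives strrchr-style behaviour a
// name you can actually guess from the method list.
class Str
{
    private $s;

    public function __construct($s)
    {
        $this->s = $s;
    }

    // Everything after the last occurrence of $needle
    // (or the whole string if $needle never occurs).
    public function afterLast($needle)
    {
        $pos = strrpos($this->s, $needle);
        return $pos === false ? $this->s : substr($this->s, $pos + strlen($needle));
    }
}

$str = new Str('http://www.google.com/a/b/c/d.img');
echo $str->afterLast('/'); // prints d.img
```

<p>A method like afterLast is discoverable from an IDE&#39;s autocomplete list in a way that strrchr never will be.</p>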
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing and those, whether in his lab, the other labs or management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
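<p>The guard is easy to sanity check in plain PHP, outside of Magento (the variable name is kept from the snippet above; this is not Magento code):</p>

```php
<?php
// '' is what the base calculation price comes back as for a free item.
$unitPrice = '';
// The guard normalises it to a real zero before interpolation.
$unitPrice = ((float) $unitPrice > 0) ? $unitPrice : 0.00;
echo "<unit-price currency=\"USD\">{$unitPrice}</unit-price>";
// prints <unit-price currency="USD">0</unit-price>
```

<p>A paid-for item passes through untouched, so regular carts are unaffected.</p>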
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument, instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did in November though, resolve to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk&#39;s development formalising most of the vocabulary of OO software, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can, whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. It really is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work in a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer switches you directly to that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
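<p>None of this is stock zsh behaviour, incidentally; it comes from a few options and aliases that frameworks like oh-my-zsh set up for you. A rough equivalent for a bare ~/.zshrc would be something like this (a sketch; check your framework&#39;s defaults):</p>
<pre><code># Every cd pushes the old directory onto the stack; &#39;d&#39; lists
# the stack and a bare digit jumps straight to that entry.
setopt AUTO_PUSHD PUSHD_IGNORE_DUPS
alias d=&#39;dirs -v&#39;
for i in {1..9}; do alias &quot;$i&quot;=&quot;cd -$i&quot;; done

# AUTO_CD lets a bare directory name (or ..) act as a cd.
setopt AUTO_CD
</code></pre>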
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at GitHub, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story, very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
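<p>If you use Bundler, you can record the same flags once so every bundle install picks them up. The equivalent entry in ~/.bundle/config (written for you by &#39;bundle config build.mysql2 ...&#39;) looks roughly like this, assuming the same paths as above:</p>
<pre><code># ~/.bundle/config
BUNDLE_BUILD__MYSQL2: &quot;--with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql&quot;
</code></pre>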
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine. However, if we want to debug during a PHPUnit run, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings early enough for xdebug to hook in.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!) much more straightforward, and far less insecure.</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections made to its own localhost port 9000 back to port 9000 on the machine you connected from. When xdebug on the VM connects to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
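<p>With the tunnel in place, xdebug&#39;s own configuration can simply point at localhost. As a sketch (these are the xdebug 2-era setting names; check them against your installed xdebug version):</p>

```ini
; xdebug connects out to "localhost", and the ssh tunnel carries
; the connection back to the IDE on the machine you connected from
xdebug.remote_enable = 1
xdebug.remote_host = localhost
xdebug.remote_port = 9000
```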
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin, which isn&#39;t bad at all, but once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it coincides with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3, with features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="http://github.org">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available, and in the PHP camp <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
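<p>For anyone who hasn&#39;t tried it, the whole workflow hangs off one file. A minimal composer.json sketch (monolog is just a familiar example package):</p>

```json
{
    "require": {
        "monolog/monolog": "1.*"
    }
}
```

<p>Run composer install and the dependency is fetched from Packagist into vendor/, along with a generated autoloader.</p>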
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age strictly speaking is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with (although recent versions will read standard input if you pass it <code>-</code>). We can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
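<p>Process substitution is worth knowing in its own right: each &lt;(command) is replaced by a file name that reads as that command&#39;s output. A tiny illustration (it needs bash or zsh, so the example wraps itself in an explicit bash -c):</p>

```shell
# cat only accepts file arguments, but process substitution lets it
# concatenate the output of two commands as if they were files
bash -c 'cat <(printf "a\nb\n") <(printf "c\n")'
# prints the three lines a, b, c
```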
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
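<p>If you want to see this in action without touching a real project, a throwaway repo does the trick (assuming git is on your PATH; --pretty=format: suppresses the commit header so only the file names print):</p>

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email dev@example.com
git config user.name Dev
echo a > a.txt
echo b > b.txt
git add a.txt b.txt
git commit -qm "add two files"
# list only the files touched by HEAD
git show --name-only --pretty=format: HEAD
```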
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First, shut down the running instance and then start mysqld directly:</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was broadly respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or on IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice seems legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this looks good. But that switch statement sure is smelly. Does it, really, need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? One thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are often wrong, or at best incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
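<p>The whole lifecycle is easy to reproduce with a scratch setup, using a local bare repository to stand in for the remote (assuming git is on your PATH; the branch is deleted directly in the &#39;remote&#39; to mimic a deletion made from another host):</p>

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null   # cloning an empty repo warns; ignore it
cd work
git config user.email dev@example.com
git config user.name Dev
echo hi > file.txt
git add file.txt
git commit -qm "initial commit"
git push -q origin HEAD                    # publish the default branch
git push -q origin HEAD:doomed             # and a second branch
git -C "$tmp/origin.git" branch -D doomed  # "another host" deletes it upstream
git branch -r                              # origin/doomed still listed locally
git remote prune origin                    # drops the stale tracking ref
git branch -r                              # origin/doomed is gone
```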
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
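<p>As a sketch of what the local.xml override looks like (the event and observer node names below are illustrative; take the real ones from Mage_Log&#39;s own config.xml):</p>

```xml
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <log>
                        <!-- 'disabled' replaces the original observer
                             definition, so it is never fired -->
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```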
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, you can see that if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace the offending line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of how bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you can imagine another engineer coming in to make the code XSS-safe, fixing one spot but (programmers being human) missing the other, identical, line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and adds a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
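<p>For readers who can&#39;t see the embedded gist, here is a minimal sketch of the idea. The node names are illustrative (target whichever menu entry you want to hide), and Nonexistent_Module is simply any module name that isn&#39;t installed:</p>
<pre><code>&lt;!-- app/code/local/My/Module/etc/adminhtml.xml (illustrative path) --&gt;
&lt;config&gt;
    &lt;menu&gt;
        &lt;!-- e.g. the top-level Catalog menu --&gt;
        &lt;catalog&gt;
            &lt;depends&gt;
                &lt;!-- depending on a module that doesn&#39;t exist disables the item --&gt;
                &lt;module&gt;Nonexistent_Module&lt;/module&gt;
            &lt;/depends&gt;
        &lt;/catalog&gt;
    &lt;/menu&gt;
&lt;/config&gt;
</code></pre>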
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
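<p>A self-contained rehearsal of the trick, with invented file names and timestamps (the two boundary files play the role of the start and end dates):</p>

```shell
# Throwaway demonstration of the -newer / ! -newer range trick
# (file names and timestamps are invented for the example)
dir=$(mktemp -d)
cd "$dir"

touch -t 201208010900 old.log   # before the range
touch -t 201208050900 mid.log   # inside the range
touch -t 201208100900 new.log   # after the range

# Boundary files marking the start and end of the range
touch -t 201208030000 start_date_file
touch -t 201208080000 end_date_file

# List only files modified after start and not after end,
# excluding the boundary files themselves
find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name start_date_file ! -name end_date_file
# → ./mid.log
```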
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting: {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much like ; does in regular bash).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning. First: if you have an old Magsafe (1) powerpack, as I did from my old 13&quot; Macbook Pro, and it has an equal or higher wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can drive a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you&#39;ve installed coreutils and have GNU ls, find etc., it makes sense to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. Edit this file (as root or via sudo) and add your Macports shells; once that&#39;s done, chsh will let you change no problem.</p>
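<p>A sketch of the steps, assuming the Macports bash lives at /opt/local/bin/bash; the append below is rehearsed against a temporary stand-in for /etc/shells so it runs without sudo:</p>

```shell
# chsh refuses shells not listed in /etc/shells, so the new shell path
# (assumed here to be /opt/local/bin/bash) must be added to that list first.
shells=$(mktemp)
printf '%s\n' /bin/bash /bin/zsh > "$shells"   # stand-in for /etc/shells

newshell=/opt/local/bin/bash
# Append only if it isn't already listed
grep -qx "$newshell" "$shells" || echo "$newshell" >> "$shells"

grep -x "$newshell" "$shells"
# → /opt/local/bin/bash
```

<p>On the real system the equivalent is <code>sudo sh -c &#39;echo /opt/local/bin/bash &gt;&gt; /etc/shells&#39;</code> followed by <code>chsh -s /opt/local/bin/bash</code>.</p>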
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output (--silent). It returns each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes; because a single quote is awkward to escape inside a single-quoted awk program, it is passed in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
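<p>You can rehearse the awk / paste / sed stages without a database by substituting printf for mysql; the colour values below are stand-ins for your column data:</p>

```shell
# Stand-in for the `mysql --silent` output: one value per line
printf '%s\n' red green blue \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/\(.*\)/[\1];/'
# → ['red','green','blue'];
```

<p>Note the escaped \( and \) in the sed expression: without -E, plain parentheses match literally and the capture group never fires.</p>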
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search for documents that have no value for a date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is harder, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in this date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match, then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop; it has to be constructed anew each time, but that allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils... and irssi. Enough utilities to let you build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. And by break, in the absolute best case, I mean merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically the getProductStatus() method</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and pasting it into MySQL gave a bunch of valid-looking results. Oddly though, the status column came first and the product id column second (the opposite order to the if branch). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two entries (one for each unique status value). For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
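<p>A quick awk analogue of this keying behaviour (with made-up product ids and status values) shows why the column order matters: keying rows by their first column keeps one entry per distinct key.</p>

```shell
# Three hypothetical (entity_id, status) rows
rows='1 enabled
2 enabled
3 disabled'

# Keyed by entity_id (first column): one entry per product
printf '%s\n' "$rows" | awk '{ if (!($1 in p)) n++; p[$1] = $2 } END { print n }'
# → 3

# Keyed by status (the buggy column order): rows collapse per unique status
printf '%s\n' "$rows" | awk '{ if (!($2 in p)) n++; p[$2] = $1 } END { print n }'
# → 2
```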
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD) drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end, you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
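<p>The book&#39;s examples are in Java, but the data builder idea translates directly to PHP. As a rough, hypothetical sketch (the Order class and its fields are invented here for illustration), a builder supplies safe defaults so each test only states the data it actually cares about:</p>
<pre><code>class OrderBuilder
{
    // Safe defaults; tests override only what matters to them
    private $customer = &#39;a customer&#39;;
    private $amount = 100;

    public static function anOrder()
    {
        return new self();
    }

    public function withCustomer($customer)
    {
        $this-&gt;customer = $customer;
        return $this;
    }

    public function withAmount($amount)
    {
        $this-&gt;amount = $amount;
        return $this;
    }

    public function build()
    {
        return new Order($this-&gt;customer, $this-&gt;amount);
    }
}

// A test now reads as a description of the interesting data only
$order = OrderBuilder::anOrder()-&gt;withAmount(250)-&gt;build();
</code></pre>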
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read, and its chapters are of a length that can be comfortably read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some room for confusion when you open a login shell from within an existing session, for example with the su - command or the explicit login shell option some terminal emulators provide. In these cases the same rule applies: a login shell sources .bash_profile (and .bashrc only if your .bash_profile sources it).</p>
<p>I tend to put environment setup in .bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to have your .bash_profile simply source .bashrc and then put everything in .bashrc.</p>
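<p>For example, a minimal .bash_profile that simply delegates to .bashrc (a common convention rather than anything bash requires) might look like this:</p>
<pre><code># ~/.bash_profile - sourced by login shells only
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
</code></pre>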
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block programmatically in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
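<p>With that expectation in place, code under test can chain calls on the mock just as it would on a real Zend_Mail instance, for example:</p>
<pre><code>// Each call returns the mock itself, so the chain never breaks
$mock-&gt;setSubject(&#39;Order Confirmation&#39;)
     -&gt;setBodyText(&#39;Thanks for your order&#39;)
     -&gt;send();
</code></pre>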
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; for running VMs use <em>controlvm</em> instead.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
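<p>For a running VM, the same rules can be added and removed on the fly with <em>controlvm</em>, which takes the same natpf rule syntax (if I recall correctly this needs VirtualBox 4.x or later):</p>
<pre><code>VBoxManage controlvm &quot;VM name&quot; natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage controlvm &quot;VM name&quot; natpf1 delete &quot;guestssh&quot;
</code></pre>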
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that, then: we need compatible x86 shared libraries. But when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), takes the collection and iterates over it, assigning each address to an array keyed by its entityId. It then returns whichever value matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t already doing this. The changed code works, nothing appears to break and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice by restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running, a task (this was the mid 90s) that I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3 and Unity, the Linux Desktop has come a <em>long</em> way.)</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on time and time again: <a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, Identify lets you get information about a file: size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example, to resize a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; srcfile.jpg dstfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
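<p>The sed rename in the loop above can also be done with bash parameter expansion, which saves spawning a subshell per file. A small sketch of just the renaming logic (the filename is an illustrative example; no ImageMagick needed here):</p>

```shell
# Build the "-resized" output name without sed: ${VAR%suffix}
# strips a trailing suffix from the variable's value.
IMAGE="holiday-photo.jpg"
RESIZED="${IMAGE%.jpg}-resized.jpg"
echo "$RESIZED"   # holiday-photo-resized.jpg
```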
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push to a branch called adiffnamefortheremotebranch at remote origin the contents of mybranch. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
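<p>Newer versions of Git also accept an explicit --delete flag on push, which does the same thing without the colon syntax. A self-contained sketch, using a throwaway local bare repository standing in for a real remote (all paths and names here are illustrative):</p>

```shell
set -e
tmp=$(mktemp -d)                       # throwaway directory for the demo
git init -q --bare "$tmp/origin.git"   # a local stand-in for a real remote
git clone -q "$tmp/origin.git" "$tmp/clone"
cd "$tmp/clone"
git config user.email you@example.com
git config user.name 'Your Name'
git commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD                # publish the default branch
git push -q origin HEAD:develop        # create the remote branch 'develop'
git push -q origin --delete develop    # equivalent of 'git push origin :develop'
git ls-remote --heads origin           # 'develop' is gone
```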
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file. I.e. it soaks up all the input data before it commences writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
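<p>To see what the conversion actually does at the byte level, here&#39;s a tiny sketch (the filename is illustrative): byte 0xE9 is &#39;é&#39; in cp1252, and comes out the other side as the two-byte UTF-8 sequence 0xC3 0xA9.</p>

```shell
# \351 is octal for 0xE9, which is 'é' in cp1252.
printf 'caf\351\n' > /tmp/latin1-demo.txt
iconv -f cp1252 -t utf-8 /tmp/latin1-demo.txt   # prints: café
```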
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
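<p>For this particular case an ordinary pipe achieves the same effect, which you can try offline. A self-contained sketch where cat stands in for wget (the file names are made up for the demo):</p>

```shell
set -e
cd "$(mktemp -d)"
# Build a sample tarball to play with.
mkdir atarfile
echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile
# 'cat' stands in for 'wget -q -O - URL' here; any command that
# writes the archive to stdout works the same way.
cat atarfile.tar.gz | tar zxv
cat atarfile/file.txt   # hello
```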
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can take the module list command&#39;s output via <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
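<p>If you want to preview exactly what the substitution will hand to pm-disable before running it for real, prefixing with echo makes a handy dry run. A sketch with a stand-in function in place of drush (the module names are examples, and drush itself is not needed to see the mechanism):</p>

```shell
# Hypothetical stand-in for 'drush pm-list --no-core --type=module --pipe':
list_modules() { echo 'ad ad_channel click_filter'; }
# echo shows the fully-expanded command line without executing it.
echo drush pm-disable $(list_modules)
```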
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
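<p>Newer versions of Git (1.7.0 and later) collapse the last two steps into one: the -u (--set-upstream) flag sets the tracking relationship as part of the push. A self-contained sketch against a throwaway local bare repository (paths and names are illustrative):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"   # stand-in for a real remote
git clone -q "$tmp/origin.git" "$tmp/repo"
cd "$tmp/repo"
git config user.email you@example.com
git config user.name 'Your Name'
git commit -q --allow-empty -m 'initial commit'
git checkout -q -b my-new-feature
git commit -q --allow-empty -m 'Initial feature commit'
git push -q -u origin my-new-feature               # push and set upstream in one step
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'   # origin/my-new-feature
```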
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
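<p>A quick way to confirm the result is to list the user&#39;s group names with id. (Note the new supplementary group only takes effect in new login sessions.)</p>

```shell
# List the current user's group names; pass a username
# (e.g. 'id -nG aaron') to inspect another account instead.
id -nG
```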
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
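<p>If you&#39;d rather not leave the stash entry behind, git stash pop applies and drops it in one step. A self-contained sketch of the whole rescue, using a throwaway repository (file and branch names are illustrative):</p>

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name 'Your Name'
echo 'v1' > notes.txt
git add notes.txt
git commit -q -m 'initial commit'
git branch develop
echo 'v2' > notes.txt       # oops: edited while still on the default branch
git stash                   # park the change ('save' is the default action)
git checkout -q develop
git stash pop               # apply the change and drop the stash in one step
git commit -q -am 'Apply stashed changes'
cat notes.txt               # v2
```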
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
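<p>As a shorthand, the --track flag infers the local branch name from the remote one. A self-contained sketch against a throwaway local remote that already has a develop branch (paths and names are illustrative):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"      # stand-in for a real remote
git clone -q "$tmp/origin.git" "$tmp/seed"
cd "$tmp/seed"
git config user.email you@example.com
git config user.name 'Your Name'
git commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD                   # publish the default branch
git push -q origin HEAD:develop           # publish a 'develop' branch
git clone -q "$tmp/origin.git" "$tmp/clone"
cd "$tmp/clone"
git checkout -q --track origin/develop    # same as 'checkout -b develop origin/develop'
git rev-parse --abbrev-ref HEAD           # develop
```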
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
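<p>For example, an include_path where the application&#39;s local library directory comes before the PEAR-installed copy might look like this in php.ini (the paths are illustrative; adjust them to your layout):</p>

```ini
; Local application libraries first, then the global PEAR tree.
include_path = ".:/var/www/myapp/library:/usr/share/php"
```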
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
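<p>You can produce the same file with Git itself via git config --global. The sketch below uses a scratch HOME so the demo doesn&#39;t touch a real ~/.gitconfig; on the Jenkins box you&#39;d run the two config commands as the jenkins user so they land in /var/lib/jenkins/.gitconfig.</p>

```shell
# Scratch HOME keeps this demo away from any real ~/.gitconfig.
export HOME="$(mktemp -d)"
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
cat "$HOME/.gitconfig"   # shows the [user] section written above
```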
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like. But for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our Ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new project, we copy this template and give it a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and browse all the diagnostic and code analysis artifacts generated during it.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reload the server configuration</li>
<li>restart - restart the server</li>
<li>exit - shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
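<p>If you issue these often, they can be wrapped in a small shell function. This is a minimal sketch, not part of Jenkins itself: the JENKINS_URL default and the DRY_RUN guard (which prints the request instead of sending it) are assumptions for illustration.</p>

```shell
# Hypothetical wrapper around the three HTTP commands (reload/restart/exit).
# With DRY_RUN set it only prints the curl invocation it would make.
JENKINS_URL=${JENKINS_URL:-http://localhost:8080}

jenkins_cmd() {
    case "$1" in
        reload|restart|exit) ;;
        *) echo "usage: jenkins_cmd reload|restart|exit" >&2; return 1 ;;
    esac
    if [ -n "$DRY_RUN" ]; then
        # Dry run: show the request that would be sent
        echo "curl $JENKINS_URL/$1"
    else
        curl "$JENKINS_URL/$1"
    fi
}

DRY_RUN=1
jenkins_cmd restart   # prints: curl http://localhost:8080/restart
```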
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from a sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent invocations of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig</p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote</p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>You can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>to avoid having to do this for future branches.</p>
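<p>The whole workflow can be rehearsed locally with a throwaway bare repository standing in for the GitHub remote. A sketch using temporary paths (note that on modern Git the old --set-upstream spelling has been replaced by --set-upstream-to):</p>

```shell
# Local rehearsal: a bare repo plays the role of git@github.com:ajbonner/foo.git
workdir=$(mktemp -d)
git init --bare "$workdir/foo.git"

# An existing codebase with some history
mkdir "$workdir/project" && cd "$workdir/project"
git init
git symbolic-ref HEAD refs/heads/master    # ensure the branch is named master
git config user.email you@example.com
git config user.name "A Developer"
echo hello > README
git add README
git commit -m "initial commit"

# Attach the remote and push the existing master branch into it
git remote add origin "$workdir/foo.git"
git push origin master

# Option 4 in its modern spelling: make master track origin/master
git branch --set-upstream-to=origin/master master

git config branch.master.remote   # prints: origin
```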
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I employed Mage_Test exclusively for my unit testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
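<p>The shape of that script is roughly as follows. This is a hypothetical sketch rather than the gist&#39;s exact code: connection credentials are omitted, and it echoes each mysqldump command as a dry run (drop the echo to actually dump). Note that SHOW TABLES LIKE uses SQL wildcards, so the shell&#39;s mytables_* becomes mytables_%.</p>

```shell
# Dry-run sketch: fetch the table names matching a LIKE pattern, then loop
# over them with successive mysqldump calls appending to one dump file.
mysqldump_bypattern() {
    db=$1
    pattern=$2
    outfile=$3
    # -N suppresses the column-name header row
    tables=$(mysql -N -e "SHOW TABLES LIKE '$pattern'" "$db")
    for t in $tables; do
        echo mysqldump "$db" "$t" ">> $outfile"
    done
}

mysqldump_bypattern mydb 'mytables_%' mydb_dump.sql
```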
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install it.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using full objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string and integer representations; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. Two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
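<p>You can verify this identity property yourself with <code>object_id</code> (a quick sketch; the names here are arbitrary):</p>
<pre><code># two String literals with the same characters are distinct objects
puts "fox".object_id == "fox".object_id  # false
# two Symbols with the same name are the one and only :fox object
puts :fox.object_id == :fox.object_id    # true
</code></pre>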
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime: you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
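<p>The contrast is easy to demonstrate: reassigning a Ruby &#39;constant&#39; merely emits a warning, while assigning to a Symbol is rejected outright (a small sketch; the names are made up):</p>
<pre><code>ANSWER = 42
ANSWER = 43        # allowed, but warns: already initialized constant ANSWER
puts ANSWER        # 43
# :answer = 43     # assigning to a Symbol is a SyntaxError
</code></pre>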
<p>So are Symbols a good language feature? With my lack of experience with Ruby I don&#39;t yet feel qualified to answer definitively. My gut, though, says no. In a high-level language, the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. The two best resources I&#39;ve found explaining them are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
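<p>Note that Bash aliases don&#39;t take positional arguments, so the <code>&quot;$@&quot;</code> above expands to nothing and the filename ends up appended after <code>/dev/null</code> (the alias still happens to work, since redirections may appear anywhere in a simple command). A shell function expresses the intent more directly; qlmanage ships with OSX:</p>
<pre><code>ql() { qlmanage -p &quot;$@&quot; &gt; /dev/null 2&gt;&amp;1; }
</code></pre>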
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial:</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect, and for small databases it is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text: one byte per character for a single-byte character set, and up to four bytes per character for UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: the dump can only proceed as fast as the compressor, and meanwhile (by default with MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but long-held locks should still be avoided as much as possible.</p>&#13;
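<p>If disk space allows, a safer pattern is to dump first, so locks are released as soon as mysqldump finishes, and compress as a separate step. Credentials and filenames here are placeholders:</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql&#13;
$ gzip mydump.sql    # compression now runs without touching the database&#13;
</code></pre>&#13;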
<p>When importing a large database, the choice of compression format is important: you have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a compressed datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables are locked while mysqldump runs, meaning other clients are not permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row-level locking: it locks only the rows affected by a query, not the whole table, making a conflict far less likely while performing a backup on a running application. As a transactional storage engine, though, InnoDB allows for the possibility that transactions are still active while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump opens a transaction before dumping the contents of a table, ensuring a consistent view of the data without blocking other applications: writes can occur while the backup is taking place and will not affect it. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur during the backup process; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of compression format will have a large bearing on import performance: gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports, as MySQL will only rebuild the indexes at the end of the import. With keys enabled, the indexes are updated after each row is inserted, which is suboptimal for a batch import.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This carries unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
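<p>Putting the locking options together with the import-friendly ones, a dump tuned for fast re-import looks something like this (user and database names are placeholders):</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables \&#13;
    --disable-keys --no-autocommit \&#13;
    -uuser -p mydatabase &gt; mydump.sql&#13;
</code></pre>&#13;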
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>Say you have a string, for example</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply change the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that has only the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to make an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar Subversion model of svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository set up following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring through:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the missing /tmp/pear/cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkin&#39;s site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins&#39;s install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and found the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes a word can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
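<p>If you do want the program's name in Ruby it isn't lost; it lives in the global <code>$0</code> (also available as <code>$PROGRAM_NAME</code>):</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts $0        # test.rb &#13;
puts ARGV[0]   # helloworld, given: ruby test.rb helloworld &#13;
</pre>&#13;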
<p>I'm trying to think which makes more sense; probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it goes unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter = null) method takes an optional $filter parameter. The online documentation makes scant mention of what values this parameter can take. Luckily, in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, it seems PHP's documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I'm at home, flicking through the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf</a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is with the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
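<p>Before running those two commands against a real dump, you can rehearse them on a one-line sample to see exactly what each step does. This is only a sketch, assuming a GNU/Linux shell with sed and iconv available; the file names and the sample schema line are made up:</p>

```shell
# Fabricated one-line stand-in for the real forum_db_backup.sql
printf 'CREATE TABLE post (body mediumtext) DEFAULT CHARSET=latin1;\n' > /tmp/sample_dump.sql

# Step 1: swap the declared charset from latin1 to utf8
# (the article uses sed -i for in-place editing; shown here with a copy)
sed 's/latin1/utf8/g' /tmp/sample_dump.sql > /tmp/sample_dump-renamed.sql

# Step 2: re-encode the actual bytes from latin1 to UTF-8
iconv -f latin1 -t utf-8 /tmp/sample_dump-renamed.sql > /tmp/sample_dump-utf8.sql

cat /tmp/sample_dump-utf8.sql
```

<p>Be aware that a blanket sed over the whole dump will also rewrite any literal occurrence of 'latin1' inside post content, so it's worth eyeballing the converted file before importing it.</p>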
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any language pack file needs the same conversion before you import it:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it as a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the object's current SQL state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design-challenged developer like myself, http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
Mac OS X). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple TCP client in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is rare among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str itself points to an address in memory where 20 bytes have been reserved for it. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be a pure templating language for C web applications, which is where PHP modules / extensions come in: these were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run fast. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, some methods in Zend_Pdf expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
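<p>If you are running the report from PHP, the same query works unchanged through PDO. A sketch (the DSN, table and column names here are made up for illustration):</p>

```php
<?php
// Illustrative only: 'orders' and 'created_at' stand in for your own schema.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$sql = "SELECT DATE_FORMAT(created_at, '%Y-%m') AS grouping_date, COUNT(id) AS total
        FROM orders
        GROUP BY grouping_date";

foreach ($pdo->query($sql) as $row) {
    // One row per month, with the count of records in that month.
    printf("%s: %d\n", $row['grouping_date'], $row['total']);
}
```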
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request Worldpay sent to your callback URL (including the encoded POST data) and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
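<p>If you would rather script the retry than paste the command by hand, the same resubmission can be done with PHP's curl extension. A sketch only: the POST body below is abbreviated and illustrative, and the URL is a placeholder for your real callback endpoint:</p>

```php
<?php
// Lift the real POST body from Worldpay's failure-notification email.
$postData = 'testMode=0&transId=1000000000&transStatus=Y&msgType=authResult';

$ch = curl_init('https://mysite.com/callback'); // placeholder URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $postData);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
// Inspect $response to confirm your callback handled the retry.
```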
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party code out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, addressing your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
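<p>You can see why by looking at how the class loader maps names to paths. A simplified sketch of the translation (the real logic lives in Magento's autoloader):</p>

```php
<?php
// Simplified sketch of Magento's class-name-to-path translation:
// underscores become spaces, ucwords() capitalises each segment, and the
// spaces then become directory separators.
function classToPath($class) {
    return str_replace(' ', '/', ucwords(str_replace('_', ' ', $class))) . '.php';
}

echo classToPath('MyPackage_MyModule_Model_A_Long_Name_For_A_Model'), "\n";
// MyPackage/MyModule/Model/A/Long/Name/For/A/Model.php

echo classToPath('MyPackage_MyModule_Model_Alongnameforamodel'), "\n";
// MyPackage/MyModule/Model/Alongnameforamodel.php -- not ALongNameForAModel.php
```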
<p>On Windows this is fine; on a case-sensitive file system (e.g. under Linux, or case-sensitive HFS+ on a Mac) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<pre>$product->setNumSales(1234);
$product->getResource()->saveAttribute($product, 'num_sales');</pre>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
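<p>Putting it together, a batch update over a product collection looks something like this. A sketch only: it assumes a bootstrapped Magento instance, and the 'num_sales' attribute and getNewRanking() helper are stand-ins for whatever you are actually computing:</p>

```php
<?php
// Sketch: assumes Mage has been bootstrapped (e.g. require 'app/Mage.php').
$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    $product->setNumSales(getNewRanking($product)); // hypothetical helper
    // Write just this one attribute; no full product save for every row.
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```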
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similarly to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old.</p>&#13;
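<p>If you need the same cutoff date in application code (say, to log it or build the query string yourself), PHP's DateTime can reproduce the calculation:</p>

```php
<?php
// Subtract the same 30-day interval in PHP.
$cutoff = new DateTime('2010-05-20');
$cutoff->sub(new DateInterval('P30D'));
echo $cutoff->format('Y-m-d'); // 2010-04-20
```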
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin URL by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p><span class="Apple-style-span"><span class="Apple-style-span">This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</span></span></p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin URL. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
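<p>For example, a minimal productfinder.phtml might look like this. The markup, the templatedir/ directory and the strings are purely illustrative; any markup will do, and $this is the block instance that renders the template:</p>

```php
<!-- app/design/frontend/default/default/template/templatedir/productfinder.phtml -->
<div class="product-finder">
    <h3><?php echo $this->__('Find a Product') ?></h3>
    <p><?php echo $this->__('Search widgets would go here.') ?></p>
</div>
```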
<p>You will now need to configure one or more layout XML files (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block from your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
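<p>As an aside, if you work from the shell a lot, the file-based cache can be wiped directly. This assumes the default var/cache file backend under the store root; skip it if you cache in memcache or apc:</p>&#13;

```shell
# from the Magento store root: wipe the file-based cache
# (assumes the default var/cache backend)
rm -rf var/cache/*
```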
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore within the &lt;customer_account&gt; element we add the code:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected, run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour in memcached 1.4, which removed the second (expiration time) parameter from the delete command. See http://code.google.com/p/memcached/wiki/ReleaseNotes144 for more info. Thankfully, 1.4.4 added back some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
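<p>As a concrete sketch (substitute your own locale):</p>&#13;

```shell
# persist a utf-8 locale for future shells...
echo "export LC_ALL='de_DE.UTF-8'" >> ~/.profile
# ...and switch the current shell over right away
export LC_ALL='de_DE.UTF-8'
```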
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8, and if you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<pre>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</pre>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default both these functions expect iso-8859-1 input. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<pre>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</pre>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
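<p>Incidentally, the same normalisation is available from the shell via the iconv utility, which is handy for spot-checking or converting whole files:</p>

```shell
# create a latin-1 encoded file ( \351 is the iso-8859-1 byte for 'é' )
printf 'caf\351\n' > latin1.txt

# normalise it to utf-8: the shell counterpart of PHP's iconv()
iconv -f iso-8859-1 -t utf-8 latin1.txt > utf8.txt
```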
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, at least not with the BSD sed that ships with Mac OSX; you'll either get a script processing error or something like: sed: -i: No such file or directory. (GNU sed on linux is happy with a bare -i.)</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
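<p>If a script needs to run against both GNU and BSD sed, a middle ground I find works on both is to always pass a backup suffix attached to -i, at the cost of a leftover .bak file:</p>&#13;

```shell
# in-place substitution with a backup suffix: works on GNU and BSD sed alike
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt   # the original content is kept in helloworld.txt.bak
```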
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br />http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) produces the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and seq produces a sequence of numbers from x to y. If you call seq 0 4, you will get 0, 1, 2, 3 and 4, one per line.</p>&#13;
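<p>As an aside, if you don't need the numeric index, bash can iterate the array values directly, which sidesteps the seq arithmetic entirely:</p>&#13;

```shell
#!/usr/bin/env bash
# iterate array values directly; quoting "${FILES[@]}" keeps
# elements containing spaces intact
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )

for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```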
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don't work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig, under the alias section, add this:</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
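<p>Here&#39;s a throwaway-repo sketch of what the alias does under the hood, assuming git is installed; the repo paths and the zendesk branch name are invented for the demo:</p>

```shell
# Reproduce the missing-upstream state in a temp dir, then set the upstream
# the same way the alias does.
set -e
cd "$(mktemp -d)"
git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null
cd work
git config user.email you@example.com
git config user.name "You"
git commit --allow-empty -q -m "initial"
git checkout -q -b zendesk
git push -q origin zendesk                 # forgot -u, so no tracking is set
# What the sup alias expands to:
git branch --set-upstream-to=origin/zendesk "$(git symbolic-ref --short HEAD)"
git rev-parse --abbrev-ref '@{u}'          # prints origin/zendesk
```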
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… This means variables cannot be passed along the pipeline, as each subprocess starts with a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
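<p>A quick illustration of the problem, plus one of the workarounds from that page (replace the pipe with a redirection so the loop stays in the current shell):</p>

```shell
# An assignment made inside a pipeline happens in a subshell and is lost:
count=0
printf 'a\nb\nc\n' | while read -r _; do count=$((count + 1)); done
echo "after pipeline: $count"       # still 0 in bash

# Workaround: feed the loop with a redirection instead of a pipe, so it
# runs in the current shell and the assignment survives.
count=0
while read -r _; do count=$((count + 1)); done <<EOF
a
b
c
EOF
echo "after redirection: $count"    # 3
```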
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits; they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you&#39;re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit&#39;s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a &#39;fast-forward&#39;.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
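<p>A throwaway-repo sketch of the behaviour, assuming git is installed; the branch and file names are invented for the demo:</p>

```shell
# With merge.ff=only a diverged branch is refused; after a rebase the same
# merge fast-forwards cleanly.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"
echo base > base.txt && git add base.txt && git commit -q -m "base"
main=$(git symbolic-ref --short HEAD)      # master or main, depending on git version
git checkout -q -b feature
echo f > f.txt && git add f.txt && git commit -q -m "feature work"
git checkout -q "$main"
echo m > m.txt && git add m.txt && git commit -q -m "mainline work"  # branches diverge
git -c merge.ff=only merge feature 2>/dev/null \
  || echo "merge refused: not a fast-forward"
git checkout -q feature
git rebase -q "$main"                      # replay feature on top of the mainline
git checkout -q "$main"
git -c merge.ff=only merge feature >/dev/null
echo "fast-forward merge ok"
```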
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a>, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: the Statist TDD and Mockist/London School TDD styles. The former is a style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and instead is more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.price, 0)) AS min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.price is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
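<p>The same trick can be sketched with the sqlite3 CLI, where NULLIF and MIN behave the same way as in MySQL (the table and values here are invented for the demo):</p>

```shell
# Group 1 has a zero-priced product that MIN(NULLIF(price, 0)) skips;
# group 2 has only zero prices, so its minimum comes back NULL (empty).
out=$(sqlite3 :memory: "
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 0.0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;")
echo "$out"
# prints:
# 1|4.5
# 2|
```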
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed ssl certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
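<p>If you want to see why the execute (traverse) bit matters on directories, here&#39;s a small sketch using a throwaway temp directory (run it as a regular user; root bypasses permission checks):</p>

```shell
# Without x on a directory you cannot reach the files inside it,
# even when the files themselves are world-readable.
d=$(mktemp -d)
mkdir "$d/site"
echo hello > "$d/site/index.html"
chmod 644 "$d/site"                 # readable, but not traversable
cat "$d/site/index.html" 2>/dev/null || echo "denied without +x"
chmod 755 "$d/site"                 # restore the traverse bit
out=$(cat "$d/site/index.html")
echo "$out"                         # hello
```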
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember, you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary who oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who the Executive-X he mentioned in <em>The Early History of Smalltalk</em> was (it was Jerry Elkind). However I ended up coming away with a lot more, in particular a new appreciation for a number of scientists I previously knew very little about: scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens by transporting the reader back in time to the late 60s and laying out the genesis of PARC. It then proceeds in roughly chronological order, with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did; however, Hiltzik couches everything in terms of the scientists, and it is his ability to bring these characters to life that makes the book so riveting to read.</p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his lab, the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning, and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang.</p>
<p>The book contains a number of particularly powerful scenes. Two in particular stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish, but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down, but Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close, and with his departure so too ends the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox, and of PARC particularly - their feelings, motivations and backgrounds - that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
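<p>That codepool priority can be sketched as a toy shell loop (a scratch-directory demonstration of the idea, not Magento&#39;s actual autoloader code):</p>

```shell
# Toy demo of Magento's codepool priority: the autoloader checks
# local, then community, then core, and uses the first file it finds.
mkdir -p scratch/app/code/core/Mage/Core/Model scratch/app/code/local/Mage/Core/Model
echo 'core copy'  > scratch/app/code/core/Mage/Core/Model/Foo.php
echo 'local copy' > scratch/app/code/local/Mage/Core/Model/Foo.php

for pool in local community core; do
    candidate="scratch/app/code/$pool/Mage/Core/Model/Foo.php"
    if [ -f "$candidate" ]; then
        echo "Mage_Core_Model_Foo resolves to $candidate"
        break
    fi
done
```

<p>Here the local copy wins, which is exactly why dropping the amended Checkout.php under app/code/local shadows the core version without touching it.</p>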
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic, and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a>:</p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. And despite Smalltalk&#39;s development formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer, and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
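<p>It works on any consistently delimited text, not just /etc/passwd. A quick ad-hoc example (the usernames here are made up):</p>

```shell
# Turn colon-delimited records into an aligned table
printf 'user:shell\naaron:zsh\nroot:bash\n' | column -s ':' -t
```

<p>The exact padding differs between the util-linux and BSD implementations, but either way you get neatly aligned columns.</p>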
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will then switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing:</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at GitHub, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have Macports in /opt/local (the default) and are using the mysql55 port):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine. However, if we want to debug during a PHPUnit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable version of package y, while package z requires the beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. They are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves. </p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting XML, though, is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
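<p>Process substitution is worth knowing beyond this one trick. Here is a minimal, self-contained sketch (no magerun or xmllint required): each &lt;(command) expands to a readable pseudo-file, so any program that insists on filename arguments can consume another command&#39;s output directly.</p>

```shell
# diff demands two files; process substitution supplies two streams instead.
# Wrapped in an explicit bash call since <( ) is a bash feature, not POSIX sh.
bash -c 'diff <(printf "1\n2\n") <(printf "1\n2\n") && echo identical'
# prints: identical
```

<p>The same pattern works for any file-only tool: comparing sorted files, feeding xmllint, and so on.</p>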
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
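<p>A quick way to see --name-only in action is a throwaway repository. This sketch uses hypothetical file names and commit messages, but the commands are standard git:</p>

```shell
# Build a scratch repo with two commits, then ask for the file list only.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name "You"
echo a > a.txt && git add a.txt && git commit -qm "add a"
echo b > b.txt && git add b.txt && git commit -qm "add b"
# --format= suppresses the commit header, leaving only the touched files
git show --name-only --format= HEAD   # lists just b.txt
```

<p>The same --name-only flag composes with git log and git diff for ranges, as shown above.</p>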
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in posts like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This post has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<blockquote><p>Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.</p></blockquote>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, Rails developers need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
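<p>On reasonably recent git versions you can also fetch and prune in one step with git fetch --prune (or set the fetch.prune config). A self-contained sketch using a throwaway local &quot;remote&quot; (all paths and branch names here are made up for the demo):</p>

```shell
# Simulate a remote whose branch gets deleted elsewhere, then prune while fetching.
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare remote.git
git clone -q remote.git work 2>/dev/null && cd work
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "init"
BR=$(git symbolic-ref --short HEAD)      # master or main, depending on git defaults
git push -q origin "$BR" "$BR":stale     # publish two branches
git fetch -q origin                      # local repo now tracks origin/stale
git -C ../remote.git branch -q -D stale  # delete it "from another host"
git fetch -q --prune origin              # prunes origin/stale while fetching
git branch -r                            # origin/stale is gone
```

<p>git remote prune origin and git fetch --prune end up in the same place; the latter just saves a round trip.</p>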
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe; they fixed one bit but (programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef Gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
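<p>Here&#39;s a self-contained sketch of the touch-then-find approach, run in a scratch directory with made-up dates so nothing real is affected:</p>

```shell
# Boundary files bracket Jan-Jun 2020; only the file whose mtime falls
# inside that window survives both -newer tests.
tmp=$(mktemp -d) && cd "$tmp"
touch -t 202001010000 start_boundary
touch -t 202006010000 end_boundary
mkdir logs
touch -t 202003150000 logs/in_range.log   # inside the window
touch -t 201901010000 logs/too_old.log    # before the window
touch -t 202108010000 logs/too_new.log    # after the window
find logs -type f -newer start_boundary ! -newer end_boundary
# prints logs/in_range.log
```

<p>Once the listing looks right, swap the trailing -ls for -exec rm as described below.</p>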
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence, much as ; does in regular bash; the backslash stops the shell from interpreting the semicolon itself.</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find and friends, it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2 whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
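<p>The fix looks roughly like this. A minimal sketch: it operates on a stand-in file here, because editing the real /etc/shells needs root, but the commands are otherwise the ones you would run (with SHELLS_FILE=/etc/shells and sudo):</p>

```shell
# Stand-in for /etc/shells; on a real system edit the file itself (via sudo)
SHELLS_FILE="$(mktemp)"
printf '%s\n' /bin/bash /bin/sh /bin/zsh > "$SHELLS_FILE"

NEW_SHELL=/opt/local/bin/bash

# Append the Macports bash only if it is not already listed
grep -qx "$NEW_SHELL" "$SHELLS_FILE" || echo "$NEW_SHELL" >> "$SHELLS_FILE"

# chsh consults this list, so `chsh -s /opt/local/bin/bash` will now be permitted
grep -x "$NEW_SHELL" "$SHELLS_FILE"
```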
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; - \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because the awk program itself is wrapped in single quotes, you pass the quote character in via the q variable rather than trying to escape it inline. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
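<p>You can exercise the formatting stages without a database by standing in for the mysql step with printf (a sketch; the values here are made up):</p>

```shell
# The awk | paste | sed stages from above, wrapped in a function.
# printf stands in for the `mysql --silent` output: one value per line.
js_array() {
    awk -v q="'" '{ print q $0 q }' \
        | paste -s -d ',' - \
        | sed 's/\(.*\)/[\1];/'
}

printf '%s\n' foo bar baz | js_array
# → ['foo','bar','baz'];
```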
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in this date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
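<p>That number is easy to reproduce with grep. A sketch: the throwaway tree below stands in for a real Magento checkout, where the same command pointed at app/code/core yields the count above.</p>

```shell
# Count Mage::getSingleton() call sites: grep -r emits one line per occurrence
tree="$(mktemp -d)"
mkdir -p "$tree/app/code/core/Mage/Demo"
echo '$v = Mage::getSingleton("salesrule/validator");' > "$tree/app/code/core/Mage/Demo/A.php"
echo '$s = Mage::getSingleton("core/session");' > "$tree/app/code/core/Mage/Demo/B.php"

grep -r 'Mage::getSingleton' "$tree/app/code/core" | wc -l
# prints 2 for this toy tree; ~2261 against the Magento 1.x core
```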
<p>In Magento/PHP land a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. It does mean the validator has to be constructed anew each time around, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. By break, in the absolute best case, I mean the machine merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf over to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
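<p>For next time, the whole sequence above can be collected into one short script. A sketch written as a dry run (run=echo), since the real thing needs root and the right device names; swap in sudo and your own partitions to use it for real:</p>

```shell
# Dry-run sketch of the recovery sequence; change run=echo to run=sudo
# and adjust the devices and paths for your own machine.
run=echo
target=/mnt/ubuntu

$run mount -t ext4 /dev/sda5 "$target"
$run mount -t ext2 /dev/sda1 "$target/boot"   # only if you have a separate /boot
$run mount -t proc none "$target/proc"
$run mount -o bind /dev "$target/dev"
$run mount -o bind /sys "$target/sys"
$run cp /etc/resolv.conf "$target/etc/resolv.conf"
$run chroot "$target" /bin/bash
```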
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining if all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where they were the other way round). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows, one per unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it is used as the key.</p>
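<p>The effect is easy to demonstrate outside of Magento. Here awk&#39;s associative array stands in for fetchPairs (an illustration, not Zend code): the first column becomes the key, so a status-first result set collapses to one entry per status, while an id-first one keeps every product:</p>

```shell
# Mimic fetchPairs: the first column keys an associative array,
# so duplicate keys silently overwrite one another.
pairs() { awk '{ a[$1] = $2 } END { n = 0; for (k in a) n++; print n }'; }

# status first (the buggy SELECT): three products, only two distinct statuses
printf '1 101\n1 102\n2 103\n' | pairs
# → 2

# entity_id first (the fix): one entry per product
printf '101 1\n102 1\n103 2\n' | pairs
# → 3
```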
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs, and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it wont do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit your ~/.vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
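<p>If you prefer to keep filetype mappings out of your vimrc, the same autocmd can live in an ftdetect file, which Vim loads automatically. A minimal sketch (the twig.vim filename is arbitrary):</p>

```shell
# Create the standard ftdetect directory and drop the mapping in;
# Vim sources every .vim file under ~/.vim/ftdetect on startup
mkdir -p ~/.vim/ftdetect
echo 'au BufRead,BufNewFile *.twig set filetype=htmljinja' > ~/.vim/ftdetect/twig.vim
```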
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (the main exception being Mac OSX&#39;s Terminal.app, which starts a login shell for every window), .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some room for confusion with login shells started from within an existing session, such as via the su - command or an explicit login-shell option some terminal emulators provide. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc only runs if your .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
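<p>A minimal sketch of that arrangement (the PATH entry is just an example):</p>

```shell
# ~/.bash_profile -- read by login shells only
export PATH="$HOME/bin:$PATH"      # one-time environment setup lives here

# Delegate the rest to .bashrc so login and non-login
# interactive shells end up configured the same way
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```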
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to programmatically load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, patents reduce the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
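<p>To see what the middle of that pipeline does, you can feed the sed substitution a sample line of --get-selections output (&#39;oldpackage&#39; is a made-up package name):</p>

```shell
# dpkg --get-selections prints "<package><TAB><state>" lines;
# the sed step rewrites the deinstall state to purge
printf 'oldpackage\tdeinstall\n' | sed 's/deinstall/purge/'
# prints: oldpackage<TAB>purge
```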
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> can only be used while the VM is powered off; for a running VM, use <em>controlvm</em> instead.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing compatible 32-bit versions of both libXss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t already doing it this way. The change in code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to 1152x720 to preserve its 16:10 ratio. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action:</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
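<p>You can watch the whole lifecycle with a pair of throwaway repositories (the paths and branch names below are made up for the demo):</p>

```shell
# A bare repo standing in for the remote, and a working repo beside it
git init -q --bare remote.git
git init -q work
git -C work -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git -C work remote add origin ../remote.git

# Push the current branch up under the name 'develop'
git -C work push -q origin HEAD:develop
git ls-remote --heads remote.git        # lists refs/heads/develop

# Nothing on the local side of the refspec deletes the remote branch
git -C work push -q origin :develop
git ls-remote --heads remote.git        # prints nothing
```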
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how Sponge waits until it reaches end-of-file (EOF) on its input before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin-1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it were cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
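<p>For instance, using two tiny sample files (the exact label varies between versions of file, so treat the output as a hint rather than gospel):</p>

```shell
printf 'caf\xc3\xa9\n' > utf8.txt      # "café" with é as two UTF-8 bytes
printf 'caf\xe9\n'     > cp1252.txt    # "café" with é as a single cp1252 byte
file -b --mime-encoding utf8.txt       # reports utf-8
file -b --mime-encoding cp1252.txt     # typically reports iso-8859-1 or similar
```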
<p>So if we have a directory of, say, C source files we want to convert, we can use bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes the result back to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. We evaluate the right side of the expression first: wget downloads, quietly (-q), a tgz file from somewhere on the internet and writes the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
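<p>In this particular case a plain pipe (<code>wget -q -O - url | tar zxv</code>) would work equally well; process substitution really earns its keep when a command insists on a filename rather than stdin. Here is a self-contained sketch of the same pattern, substituting <code>cat</code> on a locally built tarball for the download, and passing <code>-f -</code> to make tar&#39;s use of stdin explicit:</p>

```shell
set -e
mkdir -p src/atarfile out
echo hello > src/atarfile/file.txt
tar -C src -czf atarfile.tar.gz atarfile       # build a sample tarball
tar -C out -zxvf - < <(cat atarfile.tar.gz)    # extract via process substitution
cat out/atarfile/file.txt                      # prints: hello
```

(Process substitution is a bash feature, so run this under bash, not plain sh.)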
<p>Sounds complicated but looks simple.</p>
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can take the module list command&#39;s output via <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
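<p>The chaining pattern itself is plain shell command substitution; stripped of Drush, it looks like this (the two functions are hypothetical stand-ins for the pm-list and pm-disable calls):</p>

```shell
# Stand-in for: drush pm-list --no-core --type=module --pipe
list_modules() { printf 'ad\nad_channel\nclick_filter\n'; }
# Stand-in for: drush pm-disable
disable() { echo "disabling: $*"; }

# The inner command's output becomes the outer command's arguments
disable $(list_modules)     # prints: disabling: ad ad_channel click_filter
```

(Drush also accepts a -y flag to answer yes to prompts, handy if you want to script this unattended.)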
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
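<p>Since git 1.7.0 the last two steps can be combined: <code>git push -u</code> pushes the branch and sets its upstream in one go. A self-contained sketch against a local bare repository standing in for the remote:</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"           # stand-in "remote"
git init -q "$tmp/repo"
cd "$tmp/repo"
git remote add origin "$tmp/origin.git"
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m init
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature           # push and set upstream in one step
git rev-parse --abbrev-ref my-new-feature@{upstream}   # origin/my-new-feature
```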
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well), you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G are <em>appended</em> to the user&#39;s existing supplementary groups. Without it, the existing groups are replaced by those supplied.</p>
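<p>To verify the change, list the user&#39;s groups with id (shown here for the current user; pass a username, e.g. <code>id -nG aaron</code>, to inspect another account):</p>

```shell
# Print the group names the current user belongs to, e.g. "aaron wheel"
id -nG
```

Note that a new group only shows up in sessions started after the change; an existing login keeps its old group list until you log in again.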
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command-line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application of <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came up today when I started making amendments on a repo&#39;s master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to commit the changes to master, and I didn&#39;t want to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
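<p><code>git stash pop</code> is shorthand for apply followed by drop. Here is a self-contained sketch of the workflow above, with a throwaway repository standing in for the real one:</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo" && cd "$tmp/repo"
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m init
git branch develop
echo oops > notes.txt && git add notes.txt   # change made on the wrong branch
git stash -q                                 # shelve the uncommitted change
git checkout -q develop
git stash pop -q                             # re-apply it and drop the stash
git stash list                               # prints nothing: stash is empty
```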
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
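<p>For example, in php.ini (the paths here are illustrative; adjust them to your system and application layout):</p>

```ini
; Local application libraries first, then the PEAR-installed Zend Framework
include_path = ".:/var/www/myapp/library:/usr/share/php"
```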
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
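<p>Equivalently, you can let git write the file for you, as the error message itself suggests (run these as the jenkins user, e.g. via <code>sudo -u jenkins -H</code>, so they land in the right home directory):</p>

```shell
# Writes [user] name/email into the invoking user's ~/.gitconfig
git config --global user.name  "Jenkins"
git config --global user.email "jenkins@localhost"
git config --global user.name     # prints: Jenkins
```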
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using git@github.com:ajbonner/Bookings.git as a starting point.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build once before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries; it&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><code>reload</code>: reload the server configuration</li>
<li><code>restart</code>: restart the server</li>
<li><code>exit</code>: shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepared package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
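<p>If you script your provisioning, the port change can be done with a sed one-liner. A minimal sketch using GNU sed syntax, operating on a scratch copy of the defaults file (the path and the new port 8180 are just examples; on a live system you&#39;d target /etc/default/jenkins with sudo):</p>

```shell
# Make a scratch copy standing in for /etc/default/jenkins
cat > jenkins.defaults <<'EOF'
# port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
EOF

# Rewrite the port in place (GNU sed; 8180 is an arbitrary example)
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8180/' jenkins.defaults

grep '^HTTP_PORT=' jenkins.defaults
```

<p>Remember to restart the service afterwards for the change to take effect.</p>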
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer:</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it: </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
</li>
</ol>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
<p>Personally I find option 4 the best, as it requires the least work.</p>
<p>You can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>so that git sets up branch tracking automatically and you avoid this dance entirely.</p>
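<p>The whole flow can be rehearsed locally, using a bare repository on disk to stand in for github. This is a sketch with illustrative paths, names and a sample commit; note that recent versions of git spell option 4 as --set-upstream-to:</p>

```shell
tmp=$(mktemp -d)
git init --bare "$tmp/foo.git"            # stands in for git@github.com:ajbonner/foo.git

git init "$tmp/work"
cd "$tmp/work"
git config user.email you@example.com
git config user.name "You"
git symbolic-ref HEAD refs/heads/master   # make sure the branch is named master
echo hello > README
git add README
git commit -m "initial commit"

git remote add origin "$tmp/foo.git"
git push origin master

# Option 4: tell git which remote branch master tracks
git branch --set-upstream-to=origin/master master
git pull                                  # no more ugly error
```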
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
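<p>The shape of the combined approach looks roughly like this. The function and argument names are mine, the gist has the version I actually use, and a real run obviously needs credentials passed through to both commands:</p>

```shell
# Dump only the tables of a database whose names match a LIKE pattern
mysqldump_bypattern() {
    local db="$1" pattern="$2"

    # Ask mysql for the matching table names, one per line (-N: no header, -B: batch)
    local tables
    tables=$(mysql -N -B -e "SHOW TABLES LIKE '${pattern}'" "$db")

    # Hand the whole list to a single mysqldump invocation
    # (unquoted $tables is deliberate: we want word splitting here)
    mysqldump "$db" $tables
}
```

<p>Invoked as, for example, mysqldump_bypattern mydb &#39;mytables_%&#39; &gt; mytables.sql.</p>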
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. Until those packages land there, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas of Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install it.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Programming With Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down considerably, but it&#8217;s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: compressing inline slows the dump down, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but holding locks longer than necessary should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable than the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
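A quick way to feel out the size/speed trade-off on your own data is to compress a sample with both tools and compare the results (sample.sql below is just a throwaway stand-in for a real dump):

```shell
# Generate a throwaway file standing in for a real dump, then compare
# gzip and bzip2 output sizes; prefix each compressor with `time` to
# compare speed as well.
seq 1 100000 > sample.sql
gzip -c sample.sql > sample.sql.gz
bzip2 -c sample.sql > sample.sql.bz2
ls -l sample.sql sample.sql.gz sample.sql.bz2
```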
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysql issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other clients. Writes can occur while the backup is taking place without affecting the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the integrity of those tables can be lost as writes occur during the backup process, a risk to be weighed against that of blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
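Putting the options from the last two sections together gives a dump command along these lines. The user and database names are placeholders, and since the command needs a live server it is shown assembled into a variable rather than executed:

```shell
# Sketch only: combine the locking and import-performance options
# discussed above (hypothetical credentials and database name).
dump_cmd="mysqldump --single-transaction --skip-lock-tables \
--disable-keys --no-autocommit -uuser -p mydatabase"
echo "$dump_cmd > mydump.sql"
```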
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or in git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide or show untracked files in the status report. Passing one or more paths will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially cloned from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>, the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by recreating PEAR's cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a>, author of PHPUnit and much of the Jenkins PHP suite of tools, has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly the same as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0]; ?&gt;&#13;
&#13;
$ php test.php 1234&#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby&#13;
# file test.rb&#13;
puts ARGV[0]&#13;
&#13;
$ ruby test.rb helloworld&#13;
&gt; helloworld&#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed to the program.</p>&#13;
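The C/Bash convention is easy to demonstrate from the shell (argv_demo.sh is just a scratch file created for the illustration):

```shell
# In shell scripts $0 is the program name and $1 the first argument,
# matching the C convention described above.
cat > argv_demo.sh <<'EOF'
#!/bin/sh
echo "program: $0"
echo "first arg: $1"
EOF
sh argv_demo.sh helloworld
# prints:
# program: argv_demo.sh
# first arg: helloworld
```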
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First, back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'. Note that a blanket search-and-replace will also touch any post content that happens to contain the string 'latin1', so check the result before importing it.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
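If you want to sanity-check those two transformations before touching the real dump, you can run them on a tiny stand-in file (the file names here are placeholders, not part of the VBulletin procedure):

```shell
# Create a throwaway one-line 'dump' and apply the same fix-up steps.
printf "CREATE TABLE post (body TEXT) DEFAULT CHARSET=latin1;\n" > sample.sql
sed -i 's/latin1/utf8/g' sample.sql
iconv -f latin1 -t utf-8 sample.sql > sample-utf8.sql
cat sample-utf8.sql
```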
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you use a downloaded language pack, convert it to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool, it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV, its purpose is two-fold: a) to get yourself past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared with a capitalised name. Oh, sorry, that should be capitalized; Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
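To see just how un-constant that is, here is a contrived example (the constant name is my own invention):

```ruby
# A Ruby 'constant' is constant only by convention: reassigning it
# merely emits a warning on stderr and carries on regardless.
SPEED_OF_LIGHT = 299_792_458
SPEED_OF_LIGHT = 42   # warning: already initialized constant SPEED_OF_LIGHT
puts SPEED_OF_LIGHT   # prints 42
```

You can freeze an object to stop it being mutated, but nothing stops the constant name itself being rebound.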
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do the things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’: you know, if it quacks like a duck, it is a duck. Unfortunately, in Ruby a constant quacks like a duck but bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in strictly typed OO (e.g. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and, of course, a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment with which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and be left with a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an “appreciation” of memory management. C is nearly unique among today's programming environments in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes of memory (room for 19 characters plus the terminating NUL), and str itself points to the address where those bytes live. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
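To make that concrete, here is a rough C sketch of the work hiding behind a single concatenation; str_append is a made-up helper, not a standard library routine:

```c
#include <stdlib.h>
#include <string.h>

/* Grow dest to fit src and append it, the way a dynamic language
 * must behind every string concatenation. Returns the (possibly
 * moved) buffer, or NULL on allocation failure. */
char *str_append(char *dest, const char *src) {
    size_t old_len = dest ? strlen(dest) : 0;
    /* realloc(NULL, n) behaves like malloc(n), so the first call works too. */
    char *grown = realloc(dest, old_len + strlen(src) + 1);
    if (grown == NULL) {
        free(dest);
        return NULL;
    }
    strcpy(grown + old_len, src); /* copy src over the old NUL terminator */
    return grown;
}
```

Every '.' in a PHP expression implies roughly this: measure, reallocate, copy.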
<p>As mentioned above, PHP was originally intended to be a pure templating language for C web applications, which is where PHP modules/extensions come in: originally, your business logic was meant to live in them. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
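To get a feel for it in code, here is a small C sketch (both function names are my own, for illustration): one probes the host's byte order, the other reads a big-endian value the same way on any host:

```c
#include <stdint.h>

/* On a little-endian host the least significant byte of a
 * multi-byte integer sits at the lowest address. */
int host_is_little_endian(void) {
    uint32_t probe = 1;
    return *(const unsigned char *)&probe == 1;
}

/* Reassemble a 32-bit big-endian value byte by byte; this gives
 * the same answer regardless of the host's own endianness. */
uint32_t read_be32(const unsigned char *b) {
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
         | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}
```

This byte-by-byte style is why network code can exchange data between machines of differing endianness.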
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, there are methods within zend_pdf that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
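Incidentally, the same trick exists down at the C level: POSIX fmemopen() hands the standard file functions a stream backed by a plain buffer, much like php://memory. A rough sketch, assuming a POSIX.1-2008 libc (the function name write_via_memory_stream is my own):

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

/* Write msg into a caller-supplied buffer through an ordinary
 * FILE* handle, then report how many bytes landed in the buffer. */
size_t write_via_memory_stream(char *buf, size_t len, const char *msg) {
    FILE *fh = fmemopen(buf, len, "w+");
    if (fh == NULL)
        return 0;
    fputs(msg, fh);
    fclose(fh);          /* flushes and NUL-terminates the buffer */
    return strlen(buf);
}
```

As in PHP, nothing ever touches the disk; the stream is just a window onto memory.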
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, e.g. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded post data, and the response from your server.</p>&#13;
<p>Now, assuming you have worked around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
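<p>For reference, an svn:externals property is just a set of '&lt;directory&gt; &lt;URL&gt;' pairs, one per line. The property set on your library directory from the example above would contain:</p>&#13;

```
Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/
```

<p>while the property on your javascript include dir would hold a similar single line pointing a directory of your choice at the dojo release URL.</p>&#13;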
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead…'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel.  This will not work as you might expect on different (namely case sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
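<p>The two lookups are easier to see written out. Here is a minimal Python sketch of the translation (the MyPackage_MyModule_Model prefix stands in for whatever your module's config maps 'mymodule' to; this mirrors the behaviour, not Magento's actual code):</p>&#13;

```python
# Sketch of how a Magento model alias becomes a class name and file path:
# the alias is capitalised on underscores (like PHP's uc_words), and the
# autoloader then turns every underscore into a directory separator.

def class_for_alias(prefix: str, alias: str) -> str:
    return prefix + "_" + "_".join(p.capitalize() for p in alias.split("_"))

def path_for_class(class_name: str) -> str:
    return class_name.replace("_", "/") + ".php"

cls = class_for_alias("MyPackage_MyModule_Model", "a_long_name_for_a_model")
path = path_for_class(cls)  # ends .../Model/A/Long/Name/For/A/Model.php

# A camelcased file such as ALongNameForAModel.php is never looked up:
flat = class_for_alias("MyPackage_MyModule_Model", "alongnameforamodel")
# ends in "Alongnameforamodel", which only matches that file on a
# case-insensitive filesystem
```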
<p>On Windows this is fine; on a case-sensitive filesystem, e.g. case-sensitive HFS+ (Mac) or a typical Unix filesystem, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around 1/5 of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old and the row is selected.</p>&#13;
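<p>If you ever need the same cutoff computed application-side, the arithmetic is a one-liner. A small Python sketch mirroring DATE_SUB(CURDATE(), INTERVAL 30 DAY) from the query above:</p>&#13;

```python
# Client-side equivalent of MySQL's DATE_SUB(CURDATE(), INTERVAL 30 DAY):
# rows whose date column is earlier than this cutoff are over 30 days old.
from datetime import date, timedelta

def cutoff(today: date, days: int = 30) -> date:
    return today - timedelta(days=days)

assert cutoff(date(2010, 5, 20)) == date(2010, 4, 20)  # as in the post
```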
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p><span class="Apple-style-span"><span class="Apple-style-span">This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</span></span></p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer creates this file by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>Type="core/template' refers to a Magento class  Mage_Core_Block_Template. This type can be different if you want a  specific type of block. But core/template is the simplest way to include  some text on a page. The 'name' and 'as' attributes allow you to  reference the block in your enclosing templates, e.g. 3column.phtml, or  other layout .xml files. The template attribute is the relative path to  the text template you want to include and is relative to the theme  template root i.e.  app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your shell environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
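<p>For example, assuming a German locale is what you want (substitute your own):</p>

```shell
# List the UTF-8 capable locales available on this system
locale -a | grep -i 'utf'

# Put this line in /etc/profile, ~/.profile or ~/.bash_profile:
#   export LC_ALL='de_DE.UTF-8'

# Then check in a fresh shell that everything picked it up
locale
```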
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1; lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. The extended characters, like accents, symbols and umlauts, differ though: a cp-1252 trademark or copyright symbol is a single byte that is not a valid utf-8 sequence. Try to render a cp-1252 copyright symbol as utf-8 and the browser will just show a question mark (or replacement character) in its place. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into MySQL, ensure your tables are set to the same character set as your normalised content. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a PHP extension: </p>
<pre><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></pre>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default both of these functions assume iso-8859-1 input. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<pre><code>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</code></pre>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick (on Mac OSX, whose BSD sed requires a backup file extension argument after -i) is to pass an empty extension: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
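<p>A safer variant, which works with both BSD and GNU sed, is to pass a real backup extension instead of the empty string:</p>

```shell
# Create a scratch file, substitute in place, and keep a .bak copy of the
# original (-i.bak, with no space, works on both BSD and GNU sed)
echo 'hello world' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt

cat helloworld.txt       # goodbye world
cat helloworld.txt.bak   # hello world
```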
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page, you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expansion returns the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line. So $(seq 0 $((${#FILES[@]} - 1))) yields one index for each element of the array.</p>&#13;
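<p>As an aside, bash can also iterate over the array elements directly, which sidesteps the index arithmetic entirely:</p>

```shell
#!/bin/bash
# Iterate the elements themselves rather than generating indexes with seq
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

for FILE in "${FILES[@]}"; do   # quoting keeps paths with spaces intact
  echo "$FILE"
done

echo "${#FILES[@]} elements"    # the count expansion the seq version builds on
```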
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: baseurls, test payment accounts, or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento so its crypt key (app/etc/local.xml) is loaded&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
&#13;
// Encrypt the first commandline argument using Magento's own encryption model&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables set in one stage cannot be passed along the pipeline, as each new subprocess starts with its own copy of the environment.</p>
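<p>A quick demonstration, plus the workaround I find most useful (process substitution, so the loop body runs in the current shell; the counter here is just for illustration):</p>

```shell
#!/bin/bash
# The while loop below runs in a subshell, so its counter never makes it back
count=0
printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
echo "$count"    # prints 0

# Workaround: feed the loop with process substitution instead of a pipe
while read -r line; do count=$((count+1)); done < <(printf 'a\nb\n')
echo "$count"    # prints 2
```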
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits; they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a "fast-forward".
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
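<p>The whole workflow can be sketched in a throwaway repository (user details below are faked just for the demo commits):</p>

```shell
#!/bin/sh
# Demonstrate that a rebased (linear) branch fast-forwards without a merge commit
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'base'
main=$(git symbolic-ref --short HEAD)    # master or main, depending on git version

git checkout -q -b feature
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'feature work'

git checkout -q "$main"
git -c merge.ff=only merge feature       # succeeds: feature is directly ahead
git log --oneline --no-decorate -1       # the tip is 'feature work', no merge commit
```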
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes; this book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion, and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a descriptive message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
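<p>In a PHPUnit test that advice looks something like this (a sketch only, with a hypothetical Cart class; the third argument is the failure message):</p>

```php
<?php
// Sketch: PHPUnit 3.x style, as used in the book; Cart is hypothetical.
class CartTest extends PHPUnit_Framework_TestCase
{
    public function testEmptyCartHasZeroTotal()
    {
        $cart = new Cart();
        $this->assertEquals(
            0,
            $cart->getTotal(),
            'A freshly created cart should have a zero total' // shown on failure
        );
    }
}
```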
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a test style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
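<p>To make the distinction concrete, here is a minimal sketch in plain PHP (the classes are entirely hypothetical, and I use bare assert() rather than PHPUnit to keep it self-contained):</p>

```php
<?php
// Hypothetical domain classes, defined inline so the sketch runs on its own.
interface PaymentGateway
{
    public function charge($amount);
}

class Cart
{
    private $total = 0.0;
    public function add($price) { $this->total += $price; }
    public function total()     { return $this->total; }
}

class Checkout
{
    private $gateway;
    public function __construct(PaymentGateway $gateway) { $this->gateway = $gateway; }
    public function pay(Cart $cart) { $this->gateway->charge($cart->total()); }
}

// A hand-rolled test double that records the message it receives.
class RecordingGateway implements PaymentGateway
{
    public $charged;
    public function charge($amount) { $this->charged = $amount; }
}

// Statist style: arrange some state, run the behaviour, assert on the end state.
$cart = new Cart();
$cart->add(9.99);
assert(abs($cart->total() - 9.99) < 1e-9);

// Mockist style: assert on the message sent to the collaborator instead.
$gateway = new RecordingGateway();
$checkout = new Checkout($gateway);
$checkout->pay($cart);
assert(abs($gateway->charged - 9.99) < 1e-9); // the interaction, not the state
```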
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null excludes that value from MIN&#39;s calculation, which effectively restricts the result to values greater than zero.</p>
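<p>As a quick illustration (the table and data here are hypothetical), suppose one group holds the prices 0.00, 5.00 and 3.00:</p>

```sql
-- Hypothetical schema and data, just to show the effect
CREATE TABLE tableref (group_id INT, price DECIMAL(8,2));
INSERT INTO tableref VALUES (1, 0.00), (1, 5.00), (1, 3.00);

-- MIN alone would report 0.00; NULLIF discards the zero first
SELECT group_id,
       MIN(price)            AS naive_min,  -- 0.00
       MIN(NULLIF(price, 0)) AS min_price   -- 3.00
FROM tableref
GROUP BY group_id;
```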
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now that the alias is active, we can check it&#39;s working:</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter; both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed ssl certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
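<p>As an aside, if what you actually want is just the file name without the leading slash, a couple of stock functions will get you there:</p>

```php
<?php
$url = 'http://www.google.com/a/b/c/d.img';

// strrchr keeps the needle itself, so strip the leading slash:
echo substr(strrchr($url, '/'), 1), "\n"; // prints d.img

// Or treat the URL as a path and let basename() do the work:
echo basename($url), "\n"; // prints d.img
```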
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
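<p>To sketch what I mean (the class and method names here are entirely hypothetical, not from any real library):</p>

```php
<?php
// A hypothetical value-object wrapper over PHP's string functions.
class Str
{
    private $value;

    public function __construct($value)
    {
        $this->value = $value;
    }

    // Everything after the first occurrence of $needle.
    public function afterFirst($needle)
    {
        return new self(substr(strstr($this->value, $needle), strlen($needle)));
    }

    // Everything after the last occurrence of $needle.
    // (Note: strrchr only looks at the first character of $needle.)
    public function afterLast($needle)
    {
        return new self(substr(strrchr($this->value, $needle), strlen($needle)));
    }

    public function __toString()
    {
        return $this->value;
    }
}

$url = new Str('http://www.google.com/a/b/c/d.img');
echo $url->afterLast('/'), "\n"; // prints d.img
```

<p>Intention-revealing names like afterFirst and afterLast would sidestep the strstr/strrchr confusion entirely.</p>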
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful, even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers that shared his vision of interactive computing, and with those, whether in his or the other labs or in management, that, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
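<p>The override can be sketched in shell. This is a sketch, not a battle-tested script: it sets up a stub core file in a scratch directory purely so the commands run end to end; in a real store you would run only the mkdir and cp from the Magento root against the genuine core file.</p>

```shell
# Demo of the local codepool override in a scratch directory. The stub
# "core" file below is a stand-in; in a real store run the mkdir/cp from
# the Magento root against the genuine core Checkout.php.
cd "$(mktemp -d)"
mkdir -p app/code/core/Mage/GoogleCheckout/Model/Api/Xml
echo '<?php /* stand-in for the core Checkout.php */' \
    > app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php

# the actual override: copy core -> local, then apply the fix to the copy
mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
   app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
```

<p>Because local outranks core in the classloader, the patched copy wins without the original file ever being touched, which keeps upgrades clean.</p>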
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. In November, though, I resolved to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, Metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. The team formalised most of the vocabulary of OO software development while building Smalltalk, yet it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh. While I do love bash, since switching to zsh I haven&#39;t really looked back; when you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, if we want to debug during a phpunit test, normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to port 9000 on its localhost back to port 9000 on the ssh client&#39;s machine. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other option is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.com">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
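<p>As a sketch of what this looks like in practice, here is a minimal composer.json declaring a single dependency (the package and version constraint are purely illustrative):</p>

```json
{
    "require": {
        "symfony/console": "2.1.*"
    }
}
```

<p>Running composer install against a file like this resolves the dependency graph, fetches the packages into vendor/ and generates an autoloader, so your code can simply require vendor/autoload.php.</p>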
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What really, is wrong with PEAR? Well, if we wind the clock way back to 1999 when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to PERL&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age strictly speaking is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x, requires stable package y. Package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
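<p>Process substitution isn&#39;t specific to xmllint: bash exposes the inner command&#39;s output as a /dev/fd path, so any program that insists on a filename can read from a pipeline. A trivial illustration with wc:</p>

```shell
# <(cmd) expands to a /dev/fd/N path whose contents are cmd's output,
# so file-only tools can consume a pipeline without temporary files
wc -l <(printf 'a\nb\nc\n')
```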
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
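A throwaway repo makes the behaviour easy to see end-to-end; a minimal sketch (the repo path and file names are invented for the demo):

```shell
#!/bin/sh
# Build a scratch repo with one commit touching one file, then ask
# git show for the file list only (--format= suppresses the commit header).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo hello > greeting.txt
git add greeting.txt
git -c user.email=a@b -c user.name=demo commit -q -m 'add greeting'
git show --name-only --format= HEAD   # prints just: greeting.txt
```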
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: because the grant tables are skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general-purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, Rails folk need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branch list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
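The whole cycle can be sketched with a throwaway pair of repos (all paths and branch names here are invented for the demo):

```shell
#!/bin/sh
# Simulate a branch deleted on the remote, then prune the stale
# tracking ref locally.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/remote"
git -C "$tmp/remote" -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m 'initial commit'
git -C "$tmp/remote" branch doomed
git clone -q "$tmp/remote" "$tmp/local"
git -C "$tmp/remote" branch -D doomed   # branch removed on the remote side
git -C "$tmp/local" branch -r           # still lists origin/doomed
git -C "$tmp/local" remote prune origin # drops the stale tracking ref
git -C "$tmp/local" branch -r           # origin/doomed is gone
```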
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
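For reference, the override in the gist takes roughly this shape. This is a sketch only: the event and observer node names below are illustrative, and should be checked against the observer definitions in your Magento version's Mage_Log config.xml before use.

```xml
<!-- Sketch: disable one of the log observers via local.xml -->
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```

Repeat the same pattern for each logging event you want silenced.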
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>This fix is easy, replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (programmers being human) missing the other, identical, line. And so we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags, which let you provide environment-specific configuration for your provisioning. They are not supported by the stock Chef gem (currently version 10.12.0); to use Data Bags with Chef Solo you need version 10.14.0 or above, which means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
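As a sketch, the adminhtml.xml trick looks something like this. The menu node name (system here) is illustrative only; what matters is that the module named in depends does not exist in your install:

```xml
<!-- Sketch: hide an admin menu entry by depending on a module
     that is not installed. Node names are illustrative. -->
<config>
    <menu>
        <system>
            <depends>
                <module>Mage_NoSuchModule</module>
            </depends>
        </system>
    </menu>
</config>
```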
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here, the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped semicolon (\;) terminates the command sequence (much as an unescaped semicolon does in regular bash).</p>
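The boundary-file technique is easy to try end-to-end; a self-contained sketch (file names invented, touch -t sets the mtimes):

```shell
#!/bin/sh
# Three files with different mtimes, two touch-created boundary
# files, and a find that matches only the one in between.
set -e
tmp=$(mktemp -d)
cd "$tmp"
touch -t 202001010000 too_old.log
touch -t 202006150000 in_range.log
touch -t 202012310000 too_new.log
touch -t 202003010000 start_date_file
touch -t 202009010000 end_date_file
find . -name '*.log' -type f -newer start_date_file ! -newer end_date_file
# prints: ./in_range.log
```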
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equally rated wattage, you can use it with your macbook air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; macbook pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change shell without complaint.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap each value in single quotes. Because a literal single quote is awkward to embed inside a single-quoted awk program, you pass one in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
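To see the pipeline work without a database handy, printf can stand in for mysql's --silent output (one value per line; the colour values are made up):

```shell
#!/bin/sh
# printf emits one value per line, just like mysql --silent does;
# the rest of the pipeline is unchanged from the post.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# prints: ['red','green','blue'];
```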
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for and against. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a>, who looks in-depth at the topic. I generally think, though, that in Object Oriented software you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each iteration. That means it also has to be constructed anew on each iteration, but it allows PHP to free the memory it is using.</p>
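<p>The effect is easy to reproduce outside Magento. The Python sketch below uses hypothetical names, not Magento internals: a validator caches some state for every item it processes. Fetched as a singleton, that cache is pinned in memory for the life of the process; fetched as a fresh model, each instance becomes garbage as soon as the next iteration begins.</p>
<pre><code># Hypothetical validator that caches state for each item it processes,
# standing in for the rule data a real salesrule validator accumulates.
class Validator:
    def __init__(self):
        self.cache = []

    def process(self, item):
        self.cache.append([0] * 1000)  # per-item state that is never pruned

registry = {}

def get_singleton(cls):
    # One shared instance, kept referenced for the life of the process.
    return registry.setdefault(cls, cls())

def get_model(cls):
    # A brand-new instance on every call.
    return cls()

# Singleton: the cache survives the whole loop, so memory grows unbounded.
for item in range(100):
    get_singleton(Validator).process(item)
assert len(get_singleton(Validator).cache) == 100

# Fresh model: each instance is unreferenced after its iteration and can
# be freed, at the cost of re-running the constructor each time.
for item in range(100):
    get_model(Validator).process(item)
assert len(get_model(Validator).cache) == 0
</code></pre>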
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. By break I mean, in the absolute best case, they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware, you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. This is typically used on grouped products when determining whether all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily enough, though, the status column was first and the product id column was second (unlike the if branch, where the product id comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key, and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just 2 rows (one for each unique status code). In order for this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
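<p>The fetchPairs behaviour is easy to model outside Zend DB. This Python sketch, with made-up product ids and status values, shows why the column order matters:</p>
<pre><code>def fetch_pairs(rows):
    # Like Zend_Db fetchPairs: the first column becomes the key, the
    # second the value; later rows overwrite earlier rows sharing a key.
    return {row[0]: row[1] for row in rows}

# Broken select returns (status, product_id), so three products collapse
# into two entries keyed by status, and lookups by product id find nothing.
broken = fetch_pairs([(1, 101), (1, 102), (2, 103)])
assert broken == {1: 102, 2: 103}

# Fixed select returns (entity_id, status): one entry per product, so
# every product id resolves to a status instead of falling back to -1.
fixed = fetch_pairs([(101, 1), (102, 1), (103, 2)])
assert fixed == {101: 1, 102: 1, 103: 2}
</code></pre>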
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end, you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the Sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being. </p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically, this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell in other ways, such as if you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases the rule still applies: a login shell means .bash_profile is sourced first, and .bashrc only if .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
    -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
    -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
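<p>The same trick translates to other mocking libraries. For comparison, here is a sketch with Python&#39;s unittest.mock, where the method names are invented for illustration: each named method has its return value pointed back at the mock itself, mirroring returnSelf().</p>
<pre><code>from unittest.mock import MagicMock

def fluent_mock(method_names):
    # Make each named method return the mock itself, so chains never break.
    mock = MagicMock()
    for name in method_names:
        getattr(mock, name).return_value = mock
    return mock

# Hypothetical Zend_Mail-style fluent interface.
mail = fluent_mock(["set_subject", "set_body_text", "add_to"])
result = mail.set_subject("hi").set_body_text("hello").add_to("dev@example.com")

assert result is mail                           # every call returns the mock
mail.set_subject.assert_called_once_with("hi")  # calls are still recorded
</code></pre>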
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably it is stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em>, the configuration files of uninstalled packages are not deleted.</p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use <em>dpkg</em> to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
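<p>Before purging, it can be useful to preview which packages are actually in the &#39;deinstall&#39; state (removed, but with config files left behind). That is exactly what the grep stage of the one-liner selects:</p>

```shell
# List packages that were removed but not purged; the second column
# of --get-selections shows each package's selection state.
dpkg --get-selections | grep deinstall
```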
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> can only be used while a VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 shared libraries. When you hit these sorts of issues, though, it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing both a compatible libXss and various Qt libraries.</p>
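<p>When the list is long, a quick filter shows only the unresolved libraries:</p>

```shell
# Show only the dependencies the loader cannot resolve
ldd /usr/bin/skype | grep 'not found'
```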
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes with the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can occasionally be dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), then iterates over the collection, assigning each address to an array keyed by its entity ID, and returns whichever entry matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice by restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and configuring a sane desktop environment with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
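<p>For example, identify&#39;s -format flag makes it easy to pull out just the fields you want (the filename here is illustrative):</p>

```shell
# Print filename, dimensions, and format for an image:
# %f = filename, %w/%h = width/height in pixels, %m = image format
identify -format '%f %wx%h %m\n' photo.jpg
```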
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 scales to 1280x720 exactly, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? Convert fits the image within the requested bounding box, so it would actually resize it to 1152x720. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
&gt;  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of local branch mybranch into a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying: push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes the syntax easier to remember.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
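<p>Since Git 1.7.0 there is also an explicit --delete form that does the same thing and is harder to mistype. A quick end-to-end sketch (the repository paths here are throwaway examples):</p>

```shell
# Create a throwaway bare 'remote', push a branch to it, then delete the branch
git init --bare /tmp/demo-remote.git
git init /tmp/demo && cd /tmp/demo
git config user.email you@example.com && git config user.name demo
git commit --allow-empty -m "init"
git remote add origin /tmp/demo-remote.git
git push origin HEAD:develop        # create remote branch 'develop'
git push origin --delete develop    # same effect as 'git push origin :develop'
```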
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how sponge waits until end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin-1 (cp1252) to UTF-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent: it is safe to convert iso-8859-1 content as if it were cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
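<p>Checking is quick; file&#39;s -bi flags print a brief MIME type and charset (the filename here is just an example):</p>

```shell
# Write a byte that is 'é' in cp1252/iso-8859-1 (octal 351 = 0xE9),
# then ask file for the MIME type and charset
printf 'caf\351\n' > sample.txt
file -bi sample.txt
```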
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
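<p>For comparison, an ordinary pipe achieves the same result without process substitution (the URL is the same placeholder as above):</p>

```shell
# Stream the download straight into tar via a plain pipe
wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxv
```

<p>Process substitution really earns its keep when a command insists on a filename argument rather than reading from stdin.</p>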
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the pm-disable Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can take the module list output, using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a>, and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
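<p>As an aside, the backtick substitution above can also be written in the nestable POSIX $( ) form. As a stand-in sketch of the mechanics (with echo playing the role of the drush commands, since the module names here are purely illustrative):</p>
<pre><code>$ # echo stands in for &#39;drush pm-list --no-core --type=module --pipe&#39;
$ modules=$(echo &#39;ad ad_channel click_filter&#39;)
$ echo drush pm-disable $modules
&gt; drush pm-disable ad ad_channel click_filter
</code></pre>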
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
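<p>As a side note, newer Git releases (1.7.0 and later) let you combine the push and set-upstream steps into one command with the -u flag:</p>
<pre><code>$ git push -u origin my-new-feature
</code></pre>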
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; and you want to add him to the &#39;wheel&#39; group as well), you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
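<p>To double-check the result, you can list the user&#39;s groups afterwards (the username and group list here are just an illustration):</p>
<pre><code>$ id -nG aaron
&gt; aaron wheel
</code></pre>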
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
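<p>Once those are installed, you can confirm PHP can see the xsl extension before going any further:</p>
<pre><code>$ php -m | grep -i xsl
&gt; xsl
</code></pre>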
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
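<p>Note that &#39;git stash&#39; on its own is shorthand for &#39;git stash save&#39;, and &#39;git stash pop&#39; applies the stash and drops it from the stash list in one step, so the above can be shortened to:</p>
<pre><code>$ git stash
$ git checkout develop
$ git stash pop
$ git commit -a -m &#39;Apply stashed changes&#39;
</code></pre>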
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
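<p>Recent Git versions also accept a --track shorthand, which derives the local branch name from the remote branch for you:</p>
<pre><code>$ git checkout --track origin/develop
</code></pre>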
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
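<p>Alternatively, you can let git write the same file for you by running git config as the jenkins user (a sketch; the -H flag makes sudo set HOME to the jenkins user&#39;s home directory so --global writes the right .gitconfig):</p>
<pre><code>$ sudo -u jenkins -H git config --global user.name &#39;Jenkins&#39;
$ sudo -u jenkins -H git config --global user.email &#39;jenkins@localhost&#39;
</code></pre>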
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework Test Cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that looks daunting at first glance. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a first build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: Reload server configuration</li>
<li>restart: Restart the server</li>
<li>exit: Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have lost the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to  to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an interrupted sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful by itself, but it helped Google lead me to a solution: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved either using the package manager or aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>To avoid having to set this up for each new repository, you can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>With this set, git configures the upstream automatically whenever you create a branch from a remote-tracking branch.</p>
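<p>The whole flow can be rehearsed locally, with a bare repository standing in for github (paths here are throwaway examples; note that newer versions of git spell option 4 as --set-upstream-to):</p>

```shell
# Rehearse the workflow locally: a bare repo stands in for github.
tmp=$(mktemp -d)
git init --bare "$tmp/foo.git"              # the "remote"
git init "$tmp/work" && cd "$tmp/work"
git config user.email you@example.com
git config user.name "Example User"
echo hello > readme
git add readme && git commit -m 'initial commit'
git branch -M master                        # make sure the branch is named master
git remote add origin "$tmp/foo.git"
git push origin master
# option 4, in the spelling newer git versions use:
git branch --set-upstream-to=origin/master master
git pull                                    # no longer complains
```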
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined initialise the test database by changing into your store root and running the UnitTests.php test suite</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400meg database and a lot of orders, it took ~ 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
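<p>The core of the approach looks roughly like this. The mysql call is stubbed out with a shell function so the sketch runs without a server; the real script asks mysql for the table list instead.</p>

```shell
# Sketch of dumping only tables matching a pattern. `mysql` is stubbed
# here so the example is self-contained; the real version queries the
# server for the table list and pipes each match through mysqldump.
mysql() { printf 'mytables_a\nmytables_b\nothers_c\n'; }   # stub

tables=$(mysql | grep '^mytables_')
for t in $tables; do
    echo "would run: mysqldump -uuser -p mydb $t"
done
```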
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install it.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect as much of the learning Ruby literature says to basically ignore them for now, or launch into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another to get its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it&#8217;s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against filesize. The extra cpu time consumed decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
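<p>You can convince yourself of the streaming round trip locally. In this sketch a generated fake dump stands in for real data, and wc -l stands in for the mysql client at the end of the pipe:</p>

```shell
# Round-trip a fake dump through gzip; `wc -l` stands in for mysql.
dump=$(mktemp)
seq 1000 | sed 's/.*/INSERT INTO t VALUES (&);/' > "$dump"
gzip -c "$dump" > "$dump.gz"
lines=$(gunzip -c "$dump.gz" | wc -l)   # the pipe would feed mysql instead
echo "$lines statements restored"
```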
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other clients. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new bare git repository in the current directory, or if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has previously been staged (i.e. added) for local commit. It does not remove the local file; if you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, runs the status against those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>. This is the familiar model of subversion and svn commit.</p>&#13;
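<p>Tying these last two sections together, here is a throwaway end-to-end sketch you can paste into a shell. A local path stands in for the ssh remote, and all names and paths are illustrative:</p>

```shell
# Sketch: create a bare "remote", wire a working repo to it, and push.
# The local path $tmp/myrepo.git stands in for ssh://myuser@somehost.com/~/repos/myrepo.git.
set -e
tmp=$(mktemp -d)
git init --bare "$tmp/myrepo.git" >/dev/null
git init "$tmp/work" >/dev/null
cd "$tmp/work"
git config user.email 'you@example.com'
git config user.name 'You'
echo 'hello' > README
git add README
git commit -m 'Initial commit' >/dev/null
git branch -M master                    # make sure the branch is called master
git remote add origin "$tmp/myrepo.git"
git push -q origin master
git ls-remote origin                    # the remote now has refs/heads/master
```
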
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on three different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by recreating PEAR's missing cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger, and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> instructions up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised, so to get up and running I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes a word can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0];&#13;
</pre>&#13;
<pre>$ php test.php 1234&#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
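<p>A quick throwaway demonstration of the C/Bash convention (the script path here is just for illustration):</p>

```shell
# In Bash (and C, and PHP), argument zero is the program itself;
# the first real argument starts at $1.
cat > /tmp/argv-demo.sh <<'EOF'
#!/bin/sh
echo "program: $0"
echo "first arg: $1"
EOF
sh /tmp/argv-demo.sh helloworld
# prints:
# program: /tmp/argv-demo.sh
# first arg: helloworld
```
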
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby&#13;
# test.rb&#13;
puts ARGV[0]&#13;
</pre>&#13;
<pre>$ ruby test.rb helloworld&#13;
&gt; helloworld&#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby and Perl it is the first argument passed into the program.</p>&#13;
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation, though I'm used to the first argument starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the Install New Software screen. On a Mac it's found under the Help menu, by selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one reason it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. It can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
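<p>It's worth sanity-checking on a tiny sample that iconv does what we expect (a throwaway example, nothing to do with your dump):</p>

```shell
# A Latin-1 'é' is the single byte 0xE9 (octal 351); after conversion it
# should appear as the two-byte UTF-8 sequence 0xC3 0xA9.
printf 'caf\351' | iconv -f latin1 -t utf-8 | od -An -tx1
# prints (hex bytes): 63 61 66 c3 a9
```
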
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any language packs you want to install should be converted in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. A CV's purpose is two-fold: a) to get yourself past a recruiter and on to the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year, and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective, I grant, but of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases it actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of your spare time playing 'World of Warcraft' isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but keep your CV conservative, well spaced and set in a minimum of fonts. Giant mastheads, fancy bullets and a jumble of typefaces impress no one, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key; save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leaves a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make the best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group related concepts, and simplify and support your claims. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similarity of the titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (traits), a simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables whose names begin with a capital letter. Oh sorry, that should be capitalized; Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
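<p>To make that concrete, here is a minimal sketch (the constant name is invented for illustration) of just how little Ruby objects to reassignment:</p>

```ruby
# "Constants" in Ruby are simply variables whose names begin with a capital letter.
MAX_RETRIES = 3

# Reassigning one is not an error; Ruby prints a warning on stderr
# ("already initialized constant MAX_RETRIES") and carries on regardless.
MAX_RETRIES = 5

puts MAX_RETRIES  # => 5
```

<p>You can freeze a value (e.g. <code>CONFIG = {}.freeze</code>) to prevent the object being mutated, but nothing stops the name itself being rebound.</p>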
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to gain an appreciation of memory management. C is rare among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the language relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes of storage (room for 19 characters plus the terminating NUL), and str refers to that address in memory. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone outside the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
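<p>A sketch of that reallocation, with an invented helper name, might look like this. Growing a string means allocating a fresh buffer and copying both operands into it, which is roughly the work PHP performs for every concatenation:</p>

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: join two strings into a freshly allocated buffer.
 * Every call pays for one allocation and two copies, which is roughly
 * the work hiding behind each use of PHP's '.' operator. */
char *concat(const char *a, const char *b)
{
    size_t la = strlen(a);
    size_t lb = strlen(b);
    char *out = malloc(la + lb + 1); /* +1 for the terminating NUL '\0' */
    if (out == NULL)
        return NULL;
    memcpy(out, a, la);
    memcpy(out + la, b, lb + 1);     /* lb + 1 copies b's NUL as well */
    return out;
}
```

<p>The caller owns the returned buffer and must free() it, another chore PHP quietly handles for you.</p>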
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in: they were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
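<p>For instance (the column names here are invented), you can hand an in-memory stream to fputcsv and read the result straight back, with no temp file involved:</p>

```php
<?php
// Build a CSV in memory: fputcsv needs a stream handle, but nothing
// says that handle has to point at a real file on disk.
$fh = fopen('php://memory', 'wb+');

fputcsv($fh, ['id', 'name']);
fputcsv($fh, [1, 'widget']);

rewind($fh);                        // seek back to the start before reading
$csv = stream_get_contents($fh);    // "id,name\n1,widget\n"
fclose($fh);

echo $csv;
```

<p>For larger payloads, php://temp behaves the same way but spills to a temporary file once the data outgrows a memory threshold.</p>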
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
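<p>The same idea can be demonstrated end to end with SQLite's strftime standing in for MySQL's DATE_FORMAT (table name and dates are invented for illustration):</p>

```python
import sqlite3

# Tiny in-memory table standing in for "yourtable" above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, created TEXT)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "2010-06-01"), (2, "2010-06-15"), (3, "2010-07-02")],
)

# strftime('%Y-%m', ...) plays the role of MySQL's DATE_FORMAT here.
rows = con.execute(
    "SELECT strftime('%Y-%m', created) AS grouping_date, COUNT(id) "
    "FROM orders GROUP BY grouping_date ORDER BY grouping_date"
).fetchall()

print(rows)  # [('2010-06', 2), ('2010-07', 1)]
```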
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>But anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive filesystem, e.g. case-sensitive HFS+ (Mac) or a typical Unix filesystem, it will not work.</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making only update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find the attribute code, look it up either in the database (the eav_attribute table) or in the admin backend under Catalog -&gt; Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
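<p>For illustration, here is how that call might look inside a batch loop. This is a sketch only; loading the full collection and the calculateSales() helper are hypothetical stand-ins for your own batch logic:</p>
<p><code>$products = Mage::getModel('catalog/product')->getCollection();
foreach ($products as $product) {
    // calculateSales() is a made-up placeholder for however you derive the value
    $product->setNumSales(calculateSales($product));
    $product->getResource()->saveAttribute($product, 'num_sales');
}
</code></p>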
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is therefore more than 30 days old.</p>&#13;
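<p>If the column holds a Unix timestamp instead (ts_column here is a made-up name), FROM_UNIXTIME lets you make the same comparison:</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) &gt; FROM_UNIXTIME(ts_column)</code></p>&#13;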
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin URL by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin URL. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout XML templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three-column layout, you need to edit page.xml and, within the</p>&#13;
<pre>&lt;block type="page/html" name="root" output="toHtml" template="page/3columns.phtml"&gt;<br />...<br />&lt;/block&gt;</pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre>&lt;block type="core/template" name="my_product_finder" as="my_product_finder" template="templatedir/productfinder.phtml"/&gt;</pre>&#13;
<p>type="core/template" refers to a Magento class, Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or in other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In that case you will need to define a remove statement within the layout XML for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages under the URL customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages under the URL customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore within the &lt;customer_account&gt; element we add the code</p>&#13;
<pre>&lt;customer_account&gt;<br />...<br /><br />&lt;reference name="root"&gt;<br />&lt;remove name="my_product_finder"&gt;&lt;/remove&gt;<br />&lt;/reference&gt;<br /><br />&lt;/customer_account&gt;<br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected, run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where the second parameter passed to the delete function was removed. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However, by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
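<p>To confirm the change, open a new terminal (or re-source your profile) and run <code>locale</code>; LC_ALL should now report the UTF-8 locale you set.</p>&#13;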
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte does not form a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into MySQL, ensure your tables are set to your normalised encoding. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a PHP extension. </p>
<p><code>$utf8Text = iconv(&#39;iso-8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
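<p>Putting the two calls above together, the whole normalise-then-output pipeline is just (a sketch; $isoText stands in for whatever legacy input your CMS hands you):</p>
<p><code>// normalise the legacy input to utf-8, then encode it for utf-8 output
$utf8Text = iconv('iso-8859-1', 'utf-8', $isoText);
echo htmlentities($utf8Text, ENT_COMPAT, 'utf-8');
</code></p>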
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't, at least not with the BSD sed shipped with Mac OSX, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to use it like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code>. The empty string argument tells sed not to keep a backup copy of the original file.</p>&#13;
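<p>If you would rather keep a backup of the original, give -i a suffix instead of the empty string; attaching it directly to the flag also works with GNU sed, making this form reasonably portable:</p>&#13;
<p><code>sed -i.bak 's/hello/goodbye/g' helloworld.txt<br /># helloworld.txt is edited in place; helloworld.txt.bak keeps the original</code></p>&#13;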
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base URL directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br />http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br />http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br />http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br />http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page, you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) part drives the loop: ${#FILES[@]} returns the number of elements in FILES, and the seq command produces a sequence of numbers from x to y, one per line. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4.</p>&#13;
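<p>As an aside, when you don't need the numeric index, the same loop can be written more simply (and safely for paths containing spaces) by expanding the array directly:</p>

```shell
# Expand the array itself; quoting "${FILES[@]}" keeps each element
# intact even when a path contains whitespace.
FILES=( a/path/to/a/file1 a/path/to/another/file2 )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```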
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, or test payment and shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python, or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to encrypt data for core_config_data in a format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains, so the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. Handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
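<p>If you want to see the alias in action without touching a real remote, here is a sandbox sketch (the paths are illustrative, and the alias is added with git config rather than by editing ~/.gitconfig):</p>

```shell
# Set up a throwaway "remote" and a working clone.
rm -rf /tmp/remote.git /tmp/work
git init -q --bare /tmp/remote.git
git clone -q /tmp/remote.git /tmp/work && cd /tmp/work
git config user.email you@example.com && git config user.name demo
# The alias from the post, registered per-repo for this demo.
git config alias.sup '!git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`'
git commit -q --allow-empty -m base && git push -q origin HEAD
git checkout -qb topic && git commit -q --allow-empty -m work
git push -q origin topic                       # forgot -u: no tracking info set
git sup                                        # fixes it for the current branch
git rev-parse --abbrev-ref topic@{upstream}    # -> origin/topic
```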
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
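<p>A minimal sketch of the problem, together with one of those workarounds (feeding the loop from process substitution instead of a pipe):</p>

```shell
# The while loop runs in a subshell because it is a pipeline stage,
# so its increments are lost when the pipeline ends.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count + 1)); done
echo "$count"   # -> 0

# Workaround: process substitution keeps the loop in the current shell.
count=0
while read -r line; do count=$((count + 1)); done < <(printf 'a\nb\nc\n')
echo "$count"   # -> 3
```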
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p></blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
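<p>Here is a quick sandbox walk-through of the setting in action; the paths and branch names are illustrative:</p>

```shell
# Build a tiny repo where two branches have diverged.
rm -rf /tmp/ffdemo && git init -q /tmp/ffdemo && cd /tmp/ffdemo
git config user.email you@example.com && git config user.name demo
git config merge.ff only
git commit -q --allow-empty -m base
trunk=$(git symbolic-ref --short HEAD)   # default branch name varies by git version
git checkout -qb feature && git commit -q --allow-empty -m feature-work
git checkout -q "$trunk" && git commit -q --allow-empty -m diverged
# With merge.ff=only, a merge that cannot fast-forward is refused.
git merge feature || echo "refused: not a fast-forward, rebase first"
```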
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is mainly interested in setting up some state, running a behaviour, and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
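<p>The behaviour is easy to reproduce outside MySQL too. Here is the same query run against the sqlite3 command-line shell (assuming it is installed), with illustrative table and column names; NULLIF and MIN behave the same way in both databases:</p>

```shell
# Group 1 has a zero-priced product that should be ignored;
# group 2 has only zero prices, so its MIN collapses to NULL.
sqlite3 :memory: "
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0), (1, 9.99), (1, 4.5), (2, 0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;"
# group 1 -> 4.5 (the zero was turned into NULL and skipped by MIN)
# group 2 -> NULL (every price was zero)
```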
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible but attractive, functional and predictable desktop, and I think many others feel the same way. It&#39;s no real surprise to me, then, that OSX has mostly killed off the idea of Linux on the desktop.</p>
<p>But where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired.</p>
<p>With considerable effort and some patience, though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job, but it does lack the polish you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-based package manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then), you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. To HFS, PHP and php are the same thing.</p>
<p>So the point of this post is to go over getting PHP running natively with Macports, and how to run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF).</p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55.</p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our Magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock.</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. php_select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out it&#39;s because the Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc
</code></pre>
<p>Now that the alias is active, we can check it&#39;s working.</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for Magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cacheable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing the files under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps under /Users/aaron/Sites, but remember that every directory in the path needs the executable bit set for all users (so the web server can traverse the directory tree). Literally, this is just a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -Lo magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite finished running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go:</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two functions with very similar behaviour bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
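For contrast, this is roughly what a consistent naming scheme buys you. A quick Python sketch (not PHP, purely for comparison): each left-to-right string operation has an r-prefixed right-to-left twin, so the relationship between the two is obvious from the name alone.

```python
url = "http://www.google.com/a/b/c/d.img"

# partition/rpartition differ only by the "r" prefix, so "first occurrence"
# versus "last occurrence" is clear without checking the manual.
print(url.partition("/")[2])   # prints /www.google.com/a/b/c/d.img (after the first "/")
print(url.rpartition("/")[2])  # prints d.img (after the last "/")
```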
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s baseprice is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
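Magento&#39;s real autoloader does a bit more than this, but the codepool lookup order can be sketched like so (a Python sketch; the directory layout mirrors app/code, and the Mage_Foo class name is invented for the demo):

```python
import os
import tempfile

# Codepools are checked in priority order; the first matching file wins.
POOLS = ["local", "community", "core"]

def resolve(base_dir, class_name):
    """Map a class name like Mage_Foo to its file path, local pool first."""
    rel_path = class_name.replace("_", os.sep) + ".php"
    for pool in POOLS:
        candidate = os.path.join(base_dir, "app", "code", pool, rel_path)
        if os.path.exists(candidate):
            return candidate
    return None

# Demo: the same class exists in both core and local; local shadows core.
base = tempfile.mkdtemp()
for pool in ("core", "local"):
    path = os.path.join(base, "app", "code", pool, "Mage", "Foo.php")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "w").close()

print(resolve(base, "Mage_Foo"))  # ends with app/code/local/Mage/Foo.php
```

This is why dropping the amended Checkout.php under app/code/local works: the local copy shadows the core one without touching core files.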
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
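<p>Under the hood, <em>-s quit</em> simply sends the master process a QUIT signal and <em>-s stop</em> a TERM. The graceful-versus-immediate distinction can be sketched with a throwaway shell worker that traps QUIT (a toy stand-in, not nginx itself):</p>

```shell
# Toy stand-in for nginx's signal handling: a worker that traps QUIT
# gets to finish up before exiting, whereas TERM's default action
# would kill it outright.
sh -c 'trap "echo draining connections; exit 0" QUIT
       while :; do sleep 1; done' &
pid=$!
sleep 1              # give the trap time to be installed
kill -QUIT "$pid"    # analogous to `nginx -s quit`
wait "$pid"
echo "worker exited with status $?"
```

<p>On a real box you could achieve the same effect by sending the signal yourself with kill against the pid recorded in nginx&#39;s pidfile.</p>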
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. In November, though, I did resolve to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite formalising most of the vocabulary for OO software development during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument will format standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
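<p>A quick way to see what column does with delimited input (the lines below are made up rather than taken from a real passwd file):</p>

```shell
# column -t aligns fields into a table; -s sets the input field delimiter.
printf 'root:x:0\nwww-data:x:33\n' | column -s':' -t
```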
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will then switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
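<p>The trick works because :w !cmd pipes the buffer&#39;s contents to cmd&#39;s standard input, and tee (running under sudo) writes that input out to the file named by %, the current file. The tee half is easy to watch in isolation, minus the sudo (the file contents here are invented):</p>

```shell
# tee writes its stdin to the named file (and echoes it to stdout),
# which is how vim can hand a root-owned file's new contents to a
# separately privileged process instead of writing the file itself.
tmp=$(mktemp)
printf 'server_name example.com;\n' | tee "$tmp" > /dev/null
cat "$tmp"
rm -f "$tmp"
```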
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
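<p>Note the VAR=value prefix form: it exports the variable into that single command&#39;s environment only, so PHP_IDE_CONFIG never leaks into the rest of your shell session. A quick illustration with a made-up variable name:</p>

```shell
# A VAR=value prefix exports the variable only into the child process;
# the parent shell never sees it.
DEMO_VAR=hello sh -c 'echo "child sees: $DEMO_VAR"'
echo "parent sees: ${DEMO_VAR:-nothing}"
```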
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to its localhost on port 9000 back to the ssh client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
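<p>For reference, the general shape of a remote forward looks like this (placeholders rather than real hosts):</p>

```shell
# ssh -R <remote_port>:<host>:<port> user@server
#   Connections to <remote_port> on the server are tunnelled back to
#   <host>:<port>, resolved from the ssh client's side. For xdebug both
#   ends conventionally use port 9000:
# ssh -R 9000:localhost:9000 myvm.local
```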
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age strictly speaking is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
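<p>Where PEAR installs packages system-wide, Composer scopes dependencies per project, so two projects can depend on different versions of the same library without colliding. Each project simply declares what it needs in a composer.json (the package name and constraint below are invented for illustration):</p>

```json
{
    "require": {
        "acme/library-y": "~1.2"
    }
}
```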
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
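<p>Process substitution is worth knowing about generally: bash exposes a command&#39;s output as a file-like path, so tools that insist on a filename never know the difference. The same trick with diff standing in for xmllint:</p>

```shell
# <(cmd) expands to a path (e.g. /dev/fd/63) whose contents are cmd's
# output, so diff reads two "files" without any temp files on disk.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "files identical"
```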
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;</p>
<p>Alternatively if you don&#39;t care what differs in the specific contents between two branches, and only want to see different files you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
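<p>Both forms are easy to try in a throwaway repository (the file name and commit message below are invented):</p>

```shell
# Set up a scratch repo with one commit touching one file, then show
# that --name-only prints just the affected path, not the diff.
dir=$(mktemp -d)
cd "$dir"
git init -q
echo 'hello' > notes.txt
git add notes.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m 'add notes'
git show --name-only --format= HEAD
```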
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year; probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, so I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, one thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
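<p>If you want to see the whole lifecycle end to end, this throwaway sketch (all paths are temporary and the branch names are made up) manufactures a stale remote-tracking branch and then prunes it:</p>

```shell
# Create a bare "remote" and a clone, push a branch, then delete the
# branch server-side to simulate a deletion made from another host.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare remote.git
git clone -q remote.git work
cd work
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m 'initial'
git push -q origin HEAD
git checkout -q -b feature
git push -q -u origin feature
git --git-dir="$tmp/remote.git" branch -D feature  # deleted "elsewhere"
git branch -r               # origin/feature still listed (stale)
git remote prune origin     # drops refs that no longer exist upstream
git branch -r               # origin/feature is gone
```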
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
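<p>For feed readers that don&#39;t render the embedded gist, the override takes roughly this shape. This is a sketch of one commonly-circulated variant; the exact event and observer node names are assumptions here, so check them against app/code/core/Mage/Log/etc/config.xml for your Magento version:</p>

```xml
<!-- local.xml sketch: disable the visitor log observers (node names assumed) -->
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers>
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_postdispatch>
        </events>
    </frontend>
</config>
```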
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>This fix is easy, replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (programmers are human) missing the other, identical, line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. But the <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
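<p>If the gist doesn&#39;t render in your reader, the adminhtml.xml entry looks roughly like this. The menu node (catalog here) and the module name are hypothetical, so match them to the entry you actually want to hide:</p>

```xml
<!-- adminhtml.xml sketch: hide the Catalog menu via an unsatisfiable dependency -->
<config>
    <menu>
        <catalog>
            <depends>
                <module>Mage_DoesNotExist</module>
            </depends>
        </catalog>
    </menu>
</config>
```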
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped ; terminates the command sequence (much like it does in regular bash).</p>
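<p>On reasonably recent GNU (and BSD) find there is also -newermt, which takes a date string directly and saves creating the marker files. A quick sketch, with made-up file names and dates:</p>

```shell
# Three files with spread-out mtimes, then a range query on dates alone.
dir=$(mktemp -d)
touch -t 202001011200 "$dir/old.log"
touch -t 202006151200 "$dir/mid.log"
touch -t 202012311200 "$dir/new.log"
# Newer than 1 Mar 2020 but not newer than 1 Sep 2020:
find "$dir" -type f -newermt '2020-03-01' ! -newermt '2020-09-01'
# → prints only .../mid.log; append -exec rm {} \; to delete instead
```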
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, as I did from my old 13&quot; Macbook Pro, and it has an equal or higher rated wattage, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered, is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
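<p>Concretely, the dance looks like this. It is shown against a scratch copy so it&#39;s safe to paste, and /opt/local is the default Macports prefix (substitute /etc/shells, under sudo, for the real thing):</p>

```shell
# Stand-in for /etc/shells so the sketch is harmless to run.
shells=$(mktemp)
printf '/bin/bash\n/bin/zsh\n' > "$shells"
# Register the Macports bash as a permitted login shell...
echo '/opt/local/bin/bash' >> "$shells"
grep -qx '/opt/local/bin/bash' "$shells" && echo 'registered'
# ...then, for real: sudo sh -c 'echo /opt/local/bin/bash >> /etc/shells'
#                    chsh -s /opt/local/bin/bash
```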
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Because a literal single quote is awkward to embed in a single-quoted shell string, you pass one in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
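<p>Since the full pipeline needs a live database, here is just the formatting tail with printf standing in for the mysql client (the colour values are made up):</p>

```shell
# Wrap each line in quotes, join with commas, then bracket the result.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# → ['red','green','blue'];
```

<p>The trailing - tells paste to read standard input explicitly, which keeps the sketch portable between GNU and BSD paste.</p>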
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in this date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is:</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products, and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It does mean the validator has to be constructed anew on each iteration, but that is what allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils...and irssi. Enough to let you build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils and so on. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s. Gentoo systems also tended to break a lot, and by break I mean, in the absolute best case, the machine merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At that point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/oneiric (it can be whatever - the examples below use /mnt/ubuntu). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
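<p>The copy is a single command. Below is a sketch using scratch directories so it can run anywhere; on the real system the source is /etc/resolv.conf and the target is /mnt/ubuntu/etc/resolv.conf.</p>

```shell
# Stand-ins for the host's /etc and the chroot's etc directory;
# during a real recovery these would be /etc and /mnt/ubuntu/etc
host_etc=$(mktemp -d)
chroot_etc=$(mktemp -d)

# A fake host resolv.conf (on a live system this file already exists)
echo 'nameserver 192.168.1.1' > "$host_etc/resolv.conf"

# The actual step: copy the host's DNS config into the chroot
cp "$host_etc/resolv.conf" "$chroot_etc/resolv.conf"

cat "$chroot_etc/resolv.conf"
```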
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to be the first column in the result set, so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software, Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles, not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practise what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion around login shells opened from within an existing session, such as with the su - command, or the explicit login shells sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile (which, on most setups, then sources .bashrc in turn).</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
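<p>That delegation pattern can be sketched as follows; it runs against a scratch HOME so it is safe to try, but the filenames are the real ones.</p>

```shell
# Scratch HOME so the real dotfiles are untouched
demo_home=$(mktemp -d)

# Interactive-session setup belongs in .bashrc
cat > "$demo_home/.bashrc" <<'EOF'
export EDITOR=vim
EOF

# .bash_profile does one-time login setup, then delegates to .bashrc
cat > "$demo_home/.bash_profile" <<'EOF'
export PATH="$HOME/bin:$PATH"
[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"
EOF

# Simulate what a login shell does: source .bash_profile
HOME=$demo_home
. "$demo_home/.bash_profile"
echo "$EDITOR"   # set via .bashrc, pulled in by .bash_profile
```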
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>, presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
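<p>To see what that oneliner is doing, here is the grep/sed stage run over some sample --get-selections output (the package names are made up). Removed-but-unpurged packages are listed with the state &#39;deinstall&#39;, and the sed rewrites that to &#39;purge&#39; before the list is fed back into dpkg --set-selections.</p>

```shell
# Fake `dpkg --get-selections` output: package name, a tab, then the state
selections=$(printf 'libfoo1\tdeinstall\nbash\tinstall\noldtool\tdeinstall\n')

# Keep only the deinstall lines and rewrite their state to purge
flipped=$(printf '%s\n' "$selections" | grep deinstall | sed 's/deinstall/purge/')
printf '%s\n' "$flipped"
```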
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm can only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
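<p>One more trick worth noting: the list commands print each VM as a quoted name followed by its UUID, which makes bulk operations easy to script. The sketch below runs the name extraction over captured sample output in place of a live VBoxManage call (the VM names are made up).</p>

```shell
# Sample output in the `"name" {uuid}` format printed by `VBoxManage list vms`
sample='"web-1" {f1a2b3c4-0000-0000-0000-000000000001}
"db-1" {f1a2b3c4-0000-0000-0000-000000000002}'

# Split on the double quotes; field 2 is the VM name. With a live install
# you could pipe `VBoxManage list runningvms` in and poweroff each result.
names=$(printf '%s\n' "$sample" | awk -F'"' '{print $2}')
printf '%s\n' "$names"
```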
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libXss, not being there. But it <em>is</em> there; specifically, though, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 (32bit) shared libraries. When you hit these sorts of issues, though, it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing both a compatible libXss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
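<p>When tracing these problems it can save some squinting to filter ldd&#39;s output down to just the unresolved entries. A quick sketch, using /bin/sh as a stand-in binary (the idea applies to any executable, including /usr/bin/skype):</p>

```shell
# Count unresolved shared libraries; 0 means every dependency is satisfied.
# /bin/sh stands in for /usr/bin/skype: substitute any binary you like.
missing=$(ldd /bin/sh | grep -c 'not found')
echo "unresolved libraries: $missing"
```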
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.php.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over that collection, assigning each address to an array keyed by its entity ID, and then returns whichever entry matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t already doing this. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone who needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope, though, that someone can work out how to marry usability and power a little better than I feel Canonical is managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to resize a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; srcfile.jpg dstfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
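<p>The sed call works, but bash can build the new name itself with parameter expansion, saving a subshell per file. A small sketch (resized_name is my own helper, not part of ImageMagick):</p>

```shell
# Strip the .jpg suffix and append -resized.jpg, in pure shell.
resized_name() { printf '%s-resized.jpg\n' "${1%.jpg}"; }
resized_name photo.jpg   # photo-resized.jpg
# In the loop: convert -resize '1280x720' "$IMAGE" "$(resized_name "$IMAGE")"
```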
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to the requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action:</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
&gt;  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch into a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch called someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
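<p>To see this in action without touching a real remote, here is a sketch against a throwaway local repository. The --delete flag (available since git 1.7.0) is equivalent to the colon form:</p>

```shell
# Set up a bare "remote" and a working repo with one commit.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work" && cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'initial'
git remote add origin "$tmp/origin.git"

git push -q origin HEAD:develop      # create remote branch develop
git push -q origin :develop          # delete it: push "nothing" into develop

git push -q origin HEAD:develop      # recreate it
git push -q origin --delete develop  # same deletion with the explicit flag
git ls-remote --heads origin         # prints nothing: no remote branches remain
```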
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how sponge waits for end-of-file (EOF) on its input before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin-1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can use bash, iconv and sponge to save us the tedium of converting each file to a new copy by hand and then replacing the original with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, whose output is piped into sponge. Sponge soaks up the standard input until EOF, then writes it back over the original file.</p>
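<p>If moreutils is not to hand, a temporary file gives the same effect: convert to a scratch copy, then move it over the original. A sketch on a throwaway directory (demo.c and its contents are invented for the demonstration):</p>

```shell
# Write converted output to a temp file, then replace the original.
dir=$(mktemp -d) && cd "$dir"
printf 'caf\351\n' > demo.c   # "cafe" with e-acute as the single cp1252 byte 0xE9
for FILE in *.c; do
  iconv -f cp1252 -t utf-8 "$FILE" > "$FILE.tmp" && mv "$FILE.tmp" "$FILE"
done
wc -c < demo.c   # 6 bytes now: 0xE9 became a two-byte UTF-8 sequence
```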
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
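<p>The process substitution form is a neat showcase for bash, but the same job can be done with a plain POSIX pipe. A local sketch, with a tarball created on the spot standing in for the download:</p>

```shell
# Build a throwaway tarball, then extract it through a plain pipe.
dir=$(mktemp -d) && cd "$dir"
mkdir atarfile && echo 'hello' > atarfile/file.txt
tar zcf atarfile.tar.gz atarfile && rm -r atarfile
# cat stands in for: wget -q -O - http://www.somewhere.com/atarfile.tar.gz
cat atarfile.tar.gz | tar zxvf -
cat atarfile/file.txt   # hello
```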
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save changes. Then you go through the list again, disabling the previously greyed-out modules (the ones that still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can capture the module-list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
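<p>If you want to play with the substitution pattern without a Drupal install handy, here is a sketch where printf stands in for pm-list and echo merely prints the command that would run (real drush also accepts -y to skip the confirmation prompt):</p>

```shell
# printf plays the part of: drush pm-list --no-core --type=module --pipe
modules=$(printf 'ad\nad_channel\nclick_filter')
# Word splitting turns the captured lines into separate arguments.
echo drush pm-disable $modules   # drush pm-disable ad ad_channel click_filter
```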
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
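<p>Since git 1.7.0 the last two steps can be collapsed into one with push -u, which pushes and sets the upstream together. A sketch against a throwaway local remote:</p>

```shell
# Set up a bare "remote" and a working repo with one commit.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work" && cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'initial'
git remote add origin "$tmp/origin.git"

git checkout -q -b my-new-feature
git push -q -u origin my-new-feature   # push and set upstream in one step
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'   # origin/my-new-feature
```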
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
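<p>You can confirm the result with id, which lists a user&#39;s group names (note the new membership only shows up in sessions started after the usermod call):</p>

```shell
# id -nG prints group names; with no argument it reports the groups of
# the current process. For a specific user: id -nG aaron
id -nG
```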
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies, which we also need to install if they are not already present:</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have build scripts that call phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo&#39;s master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to commit the changes to master, and I didn&#39;t want to copy the files between the two branches by hand.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
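<p>Here is a self-contained sketch of the rescue, using a throwaway repository. The starting branch is captured in a variable because newer Git may name it main rather than master, and plain <code>git stash</code> does the same thing as the older <code>git stash save</code>:</p>

```shell
set -e
repo=$(mktemp -d)
cd "$repo" && git init
git config user.name "Aaron"
git config user.email "aaron@example.com"
echo v1 > app.txt
git add app.txt && git commit -m 'Initial commit'
start=$(git rev-parse --abbrev-ref HEAD)   # master or main
git branch develop
echo 'new feature' >> app.txt              # oops - edited the wrong branch
git stash                                  # older Git: git stash save
git checkout develop
git stash apply
git commit -am 'Apply stashed changes'
```

<p>The original branch is left exactly as it was, while develop picks up the change.</p>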
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
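<p>The same flow can be demonstrated end to end by faking the remote with a local bare repository (names and paths are illustrative only):</p>

```shell
set -e
tmp=$(mktemp -d)
git init --bare "$tmp/origin.git"            # stand-in remote
git clone "$tmp/origin.git" "$tmp/seed"      # seed it with two branches
cd "$tmp/seed"
git config user.name "Aaron"
git config user.email "aaron@example.com"
echo demo > README
git add README && git commit -m 'Initial commit'
git push origin HEAD                         # the default branch
git push origin HEAD:develop                 # plus a remote develop branch
# a fresh clone checks out the remote's default branch...
git clone "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
# ...so create a local develop branch tracking origin/develop
git checkout -b develop origin/develop
```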
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf command-line tool), do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
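<p>For instance, an include_path along these lines in php.ini keeps a bundled copy ahead of the PEAR one (the directory names here are purely illustrative):</p>

```ini
; local application libraries first, then PEAR's global copy of ZF
include_path = ".:/var/www/app/library:/usr/share/php"
```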
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
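<p>If you prefer, git itself can write the same settings. This sketch uses a throwaway HOME so it is self-contained; on a real server you would run the two git config lines as the jenkins user instead:</p>

```shell
export HOME=$(mktemp -d)                     # throwaway HOME, demo only
git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"
cat "$HOME/.gitconfig"                       # the file shown above, generated
```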
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;you@example.com&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed, please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial.</p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our Ant build file set up and defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and PHPUnit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give it a name. Choose whatever you like; for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reloads the server configuration</li>
<li>restart - restarts the server</li>
<li>exit - shuts the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins package starts up a Java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite Unix tools: you can use it for backups, application deployments, tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer:</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file, along with a lot of other verbiage, is a message like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful by itself, but it helped Google lead me to a solution, which is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager GUI or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a>, it&#39;s time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - EGit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Model_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Model_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent <em>git pull</em> invocations will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best with the least amount of work.</p>
<p>You can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>To avoid having to do this.</p>
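<p>The whole flow is easy to rehearse locally, with a throwaway bare repository standing in for GitHub. The sketch below is illustrative (temporary paths, a dummy commit identity) and uses the modern spelling of option 4, <code>--set-upstream-to</code>, since newer git releases dropped the bare <code>--set-upstream</code> form:</p>

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# A local bare repository stands in for git@github.com:ajbonner/foo.git
git init --bare "$tmp/foo.git"

# An existing codebase with some history (-b needs git >= 2.28)
git init -b master "$tmp/work"
cd "$tmp/work"
git config user.email dev@example.com
git config user.name "Dev"
echo 'hello' > README
git add README
git commit -m 'initial commit'

# Register the remote and push the existing history into it
git remote add origin "$tmp/foo.git"
git push origin master

# Wire up tracking so a plain 'git pull' works without the ugly error
# (older git spelled this 'git branch --set-upstream master origin/master')
git branch --set-upstream-to=origin/master master
git pull
```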
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.2.0 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
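<p>A condensed sketch of the idea is below. The function name and arguments are illustrative (the real version lives in the gist), it assumes the mysql and mysqldump clients are on your PATH, and it passes the whole table list to a single mysqldump call rather than dumping table by table:</p>

```shell
# Dump only the tables in a database whose names match a LIKE pattern.
# Usage: mysqldump_bypattern <user> <database> <pattern> > dump.sql
mysqldump_bypattern() {
  user="$1"; db="$2"; pattern="$3"
  # -N suppresses the column-name header, giving one table name per line
  tables=$(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '$pattern'" "$db")
  if [ -z "$tables" ]; then
    echo "no tables match '$pattern'" >&2
    return 1
  fi
  # Word-splitting of $tables is intentional: one argument per table
  mysqldump -u"$user" -p "$db" $tables
}
```

<p>Note that, as written, mysql and mysqldump will each prompt for the password once.</p>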
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until those updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a symbol as a hash key than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found explaining them are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
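<p>One caveat: "$@" inside an alias is not expanded the way it is inside a function. In practice the file arguments still reach qlmanage only because the shell appends them after the expanded alias text. A small function (sketched below, macOS-only since qlmanage ships with OS X) avoids the quirk entirely:</p>

```shell
# Function form of the quicklook helper: arguments are passed through
# "$@" properly instead of being appended after the alias expansion.
ql() { qlmanage -p "$@" > /dev/null 2>&1; }
```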
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is single-byte ANSI or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to it during the backup process. That risk is weighed against the risk of blocking access to the table during a lengthy backup process.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
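<p>Putting the export-side options together, a consolidated invocation might look like the sketch below. The wrapper function name is hypothetical, and, as discussed above, you should weigh --skip-lock-tables against the integrity needs of your MyISAM tables:</p>

```shell
# One consistent, import-friendly export: a single InnoDB transaction,
# no MyISAM table locks, keys rebuilt once on import, one commit per table.
backup_db() {
  user="$1"; db="$2"; outfile="$3"
  mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit \
            -u"$user" -p "$db" > "$outfile"
}
```

<p>Used as, e.g., backup_db myuser mydatabase mydump.sql</p>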
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository (with a working tree) in the current directory, or if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to: $ svn del</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r flag is short for --remote, and -d is short for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the cache directory PEAR expects:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the API docs for your installed gems (e.g. Rails and friends).</p>&#13;
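The metadata that `gem server` renders can also be read in-process, which is handy for scripting. A minimal sketch (the variable name `installed` is my own, not a RubyGems API):

```ruby
# Enumerate installed gems programmatically -- the same specification
# data the `gem server` documentation pages are generated from.
require 'rubygems'

installed = Gem::Specification.map { |spec| "#{spec.name} (#{spec.version})" }
puts installed.sort.first(10)
```

Gem::Specification is enumerable over every installed gem, so the usual Enumerable methods (map, select, sort_by) all work on it.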
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and found the plugin repository had not been initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite laborious, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases, they'll sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C and Bash the first element of ARGV is the program's name. In Ruby, and in Perl, it is the first argument passed into the program. </p>&#13;
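To make the difference concrete: Ruby keeps the program name in the global `$0` (aliased `$PROGRAM_NAME`) rather than in ARGV. A small sketch — the helper name `c_style_argv` is my own, not a Ruby builtin:

```ruby
#!/usr/bin/env ruby
# Ruby stores the script name in $0 ($PROGRAM_NAME), not ARGV[0].
# c_style_argv rebuilds a C-style argv with the name at index 0.
def c_style_argv
  [$0] + ARGV
end

puts c_style_argv.inspect
```

Run as `ruby test.rb helloworld` and the first element printed is "test.rb", matching what C, PHP and Bash would give you in argv[0].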
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin / feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often in the course of the index's lifecycle want to update documents.  This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document with an ID. Even worse is if your unique ID happens to be a string such as a url or path.  Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internalisation support is pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retro fit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you use a downloaded language pack, convert it to UTF-8 as well:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client and b) to impress the client sufficiently that they want to interview you.  So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proof read your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal within the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similarity of the titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), one simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
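<p>A short sketch of the problem (the constant names here are my own, purely illustrative):</p>&#13;

```ruby
# A Ruby "constant" is just a variable whose name starts with a capital
# letter. Reassigning one produces only a warning -- never an error.
SPEED_OF_LIGHT = 299_792_458

SPEED_OF_LIGHT = 42   # warning: already initialized constant SPEED_OF_LIGHT

puts SPEED_OF_LIGHT   # prints 42 -- so much for "constant"

# Mutating the object a constant refers to doesn't even warn:
GREETING = "hello"
GREETING << " world"
puts GREETING         # prints "hello world"
```

<p>If you want real immutability you have to ask for it explicitly with <code>freeze</code>, and even that protects only the object, not the binding.</p>&#13;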
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And, of course, a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ('\0').</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str itself points to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be a pure templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. One way to avoid having to physically create a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>The memory type is very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
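<p>By way of illustration, here's a sketch that builds a CSV entirely in memory and reads it back as a string (the row data is made up):</p>&#13;

```php
<?php
// Write CSV rows through an in-memory stream, then read the whole
// buffer back -- no file ever touches the disk.
$rows = array(
    array('id', 'name'),
    array(1, 'widget'),
    array(2, 'gadget'),
);

$fh = fopen('php://memory', 'wb+');
foreach ($rows as $row) {
    fputcsv($fh, $row);
}

// Rewind to the start and pull the buffer out as a string.
rewind($fh);
$csv = stream_get_contents($fh);
fclose($fh);

echo $csv;
```

<p>The same handle could instead be handed straight to something like Zend_Pdf, or pushed out over one of PHP's tcp:// streams.</p>&#13;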
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you have worked around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short, do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive filesystem, e.g. case-sensitive HFS (Mac) or a typical Unix filesystem, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse?". The answer depends on how much work you want to get done...]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is a HUGE difference.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well, see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The basename and dirname commands are handy and behave similarly to their php-based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table with a date more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the row is more than 30 days old and will be matched by the query above.</p>&#13;
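<p>As an aside, you can sanity-check the interval arithmetic from the shell too. This is just an illustration using GNU date (the -d option is a GNU coreutils extension, so it won't work with the BSD date shipped with Mac OSX):</p>

```shell
# MySQL's DATE_SUB(CURDATE(), INTERVAL 30 DAY) computes "30 days ago";
# GNU date can replay the same arithmetic for a fixed reference date.
date -d '2010-05-20 30 days ago' +%F   # prints 2010-04-20
```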
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and by default already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>Type='core/template' refers to a Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
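<p>For example, a minimal snippet to drop into ~/.profile (assuming de_DE.UTF-8 shows up in your 'locale -a' output):</p>

```shell
# Pick any UTF-8 locale listed by `locale -a`; the .UTF-8 suffix is what
# makes the terminal and the environment agree on the character encoding.
export LC_ALL='de_DE.UTF-8'
echo "$LC_ALL"
```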
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, as that byte does not form a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso-8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
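<p>The same conversion can be sketched with the iconv command-line tool, assuming it is installed (it ships with glibc on Linux and with Mac OSX): byte 0xFC is an u-umlaut in iso-8859-1, and re-encoding it as utf-8 yields the two-byte sequence 0xC3 0xBC.</p>

```shell
# 0xFC (octal \374) is "u umlaut" in iso-8859-1; iconv re-encodes it as
# the two-byte utf-8 sequence 0xC3 0xBC -- the same job the PHP iconv()
# extension performs on strings.
printf '\374' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1
```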
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>On Mac OSX's BSD sed, at least, it doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
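<p>The reason for the quirk is that BSD sed (the one shipped with Mac OSX) treats the word after -i as a backup-file suffix, while GNU sed takes an optional suffix attached to -i itself. If you want a single invocation that works with both, attaching a suffix directly to -i is a common trick; a small throwaway example:</p>

```shell
# "-i.bak" (no space) is accepted by both GNU sed and BSD/Mac OSX sed,
# whereas "sed -i 's/.../.../' file" only works with GNU sed.
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt   # prints "goodbye world"; the original is kept in helloworld.txt.bak
```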
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expansion returns the number of elements in FILES, and the seq command produces a sequence of numbers from x to y, one per line. So $(seq 0 $((${#FILES[@]} - 1))) generates the indices 0 through n - 1: if you call seq 0 4, you will get the numbers 0 through 4.</p>&#13;
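<p>If you do not actually need the numeric index, bash can also walk the array elements directly, which avoids the seq gymnastics entirely (quoting "${FILES[@]}" keeps elements containing spaces intact):</p>&#13;
<pre>for FILE in "${FILES[@]}"; do&#13;
  echo "$FILE"&#13;
done</pre>&#13;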
<p>So while the syntax is a little smelly, its terse power is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: baseurls, test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
if (!isset($_SERVER['argv'][1])) {&#13;
    fwrite(STDERR, "Give the value to encrypt as the first argument\n");&#13;
    exit(1);&#13;
}&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
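<p>To double check the change took effect (this verification step is my own addition, not part of the original tip), you can read the group record back with dscl&#39;s read command:</p>
<pre><code>$ dscl localhost -read /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership
</code></pre>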
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
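<p>A complementary trick (my own habit, not part of the alias above) is to set the upstream at push time instead, since git resolves HEAD to the name of the current branch:</p>
<pre><code>    pushu = push -u origin HEAD
</code></pre>
<p>With that in the alias section too, git pushu pushes the current branch to a same-named branch on origin and sets the upstream in one go, so there is no -u to remember.</p>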
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables cannot be passed along the pipeline: each stage gets its own copy of the environment, and anything it sets dies with the subshell instead of reaching the parent shell.</p>
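<p>To make this concrete, here is a quick demonstration (my own snippet, assuming bash):</p>
<pre><code>COUNT=0
printf &#39;a\nb\nc\n&#39; | while read -r LINE; do
  COUNT=$((COUNT + 1))
done
echo &quot;$COUNT&quot;    # prints 0: the while loop ran in a subshell

COUNT=0
while read -r LINE; do
  COUNT=$((COUNT + 1))
done &lt; &lt;(printf &#39;a\nb\nc\n&#39;)
echo &quot;$COUNT&quot;    # prints 3: process substitution avoids the pipeline
</code></pre>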
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a “fast-forward.”</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
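<p>If you want to see the guard rail in action, here is a disposable-repo sketch (entirely my own example; the final error line is what git prints when the branches have diverged):</p>
<pre><code>$ git init -q demo
$ cd demo
$ git config user.name &#39;Test&#39;
$ git config user.email &#39;test@example.com&#39;
$ git commit -q --allow-empty -m &#39;base&#39;
$ git branch feature
$ git commit -q --allow-empty -m &#39;diverge the mainline&#39;
$ git checkout -q feature
$ git commit -q --allow-empty -m &#39;feature work&#39;
$ git checkout -q -
$ git config merge.ff only
$ git merge feature
fatal: Not possible to fast-forward, aborting.
</code></pre>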
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. You find peppered throughout the introduction to PHPUnit subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I perhaps would also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: specifically the Statist TDD and Mockist/London School TDD styles. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and instead is more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
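<p>To see NULLIF doing its job, here is a worked example of my own (I am using sqlite3 purely because it is easy to run from a shell; NULLIF and MIN behave the same way in MySQL):</p>
<pre><code>$ sqlite3 :memory: &#39;
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 4.99), (1, 9.99), (2, 2.5);
  SELECT group_id, MIN(price), MIN(NULLIF(price, 0)) FROM products GROUP BY group_id;&#39;
1|0.0|4.99
2|2.5|2.5
</code></pre>
<p>The plain MIN(price) reports 0.0 for the first group, while the NULLIF variant skips the zero-priced rows and returns the lowest real price.</p>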
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select which lets us select a version to activate and give us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled, to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert the following, or append magento.dev to an existing 127.0.0.1 line
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
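<p>The openssl req command above will prompt you for the certificate details interactively. If you&#39;d rather skip the questions, you can pass the subject on the command line and then sanity check the result (a sketch; the -subj value is just an example matching the server_name used above):</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj &#39;/CN=magento.dev&#39; -keyout myserver.key -out myserver.crt
$ openssl x509 -in myserver.crt -noout -subject -dates
</code></pre>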
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
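<p>If you want to double check that every directory on the way down is traversable, you can walk the path and print each component&#39;s mode (a quick sketch; adjust the path to suit):</p>
<pre><code>$ p=/Users/aaron/Sites; while [ &quot;$p&quot; != / ]; do ls -ld &quot;$p&quot;; p=$(dirname &quot;$p&quot;); done
# each line should show an x in the final permission triplet, e.g. drwxr-xr-x
</code></pre>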
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -Lo magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I know strstr well, but that operates by giving you the remainder of a string haystack that occurs after some needle. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
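<p>Creating the override is just a case of mirroring the core directory structure in the local codepool and copying the file across before editing it. From the Magento root, something like:</p>
<pre><code>$ mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
$ cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
     app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
</code></pre>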
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order local, community then core. This means if two classes have the name Mage_Core_Model_Foo one exists in local the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did in November though, resolve to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I consider that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was already doing the rounds in the 60s, I wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>Since making a determined effort to read more, I&#39;ve read four books cover to cover and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It really is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will then switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
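<p>These conveniences are not all enabled in a stock zsh. The fragment below is a minimal sketch of the ~/.zshrc settings involved (frameworks such as oh-my-zsh set this sort of thing up for you; this is one way to do it, not the only way):</p>

```shell
# Assumed ~/.zshrc fragment for the directory-stack tricks shown above.
setopt auto_pushd         # every cd pushes the previous directory onto the stack
setopt pushd_ignore_dups  # keep the stack free of duplicate entries
setopt auto_cd            # a bare directory name behaves like 'cd <name>'
alias d='dirs -v'         # list the stack with numeric indexes
for index in {1..9}; do
  alias "$index"="cd +$index"  # typing a bare index jumps to that stack entry
done
```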
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have MacPorts in /opt/local (the default) and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
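<p>If the gem is pulled in through Bundler rather than installed directly, the same flags can be persisted so that every subsequent bundle install picks them up. A sketch, assuming the same MacPorts mysql55 layout as above:</p>

```shell
# Persist the MacPorts build flags for the mysql2 gem; Bundler will pass
# them along whenever it compiles the extension during 'bundle install'.
bundle config build.mysql2 \
  --with-mysql-lib=/opt/local/lib/mysql55/mysql \
  --with-mysql-include=/opt/local/include/mysql55/mysql
```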
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine; however, to debug during a PHPUnit test you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
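<p>Putting the pieces together, a session might look like the sketch below (hostnames as in the earlier examples). Note that invoking the phpunit script through php directly is one way to sidestep the -d issue, since the ini setting then applies to the whole process:</p>

```shell
# Terminal 1 (on the dev machine): open the reverse tunnel to the VM.
ssh -R 9000:localhost:9000 myvm.local

# Terminal 2 (on myvm.local): run the tests with xdebug aimed at localhost,
# which the tunnel carries back to the IDE listening on the dev machine.
PHP_IDE_CONFIG='serverName=mydevmachine.local' \
  php -dxdebug.remote_host=localhost "$(which phpunit)" -c phpunit.xml
```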
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3, with features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="http://github.org">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available, and in the PHP camp <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age strictly speaking is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best (and dubious quality at worst), and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
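<p>To make that concrete, here is a hedged sketch of the workflow; the package name is a placeholder, not a specific real library:</p>

```shell
# Declare a dependency; Composer resolves a compatible version against
# Packagist, records it in composer.json / composer.lock, and generates
# an autoloader you include with: require 'vendor/autoload.php';
composer require acme/some-library
```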
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with rather than piped input, so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
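<p>As an aside, recent libxml2 builds of xmllint also accept &#39;-&#39; as the filename to mean standard input, so a plain pipe can work too (a sketch; check your version):</p>

```shell
# '-' tells xmllint to read the document from stdin instead of a file.
echo '<config><node/></config>' | xmllint --format -
```

<p>With magerun that becomes: magerun config:dump | xmllint --format -</p>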
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
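<p>A closely related flag worth knowing is --name-status, which prefixes each path with a one-letter change type (A added, M modified, D deleted). A quick sketch in a throwaway repository:</p>

```shell
# Build a scratch repo with two commits, then show the second commit's
# files together with their change types.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you
echo one > a.txt
git add a.txt && git commit -qm 'add a'
echo two >> a.txt
echo three > b.txt
git add . && git commit -qm 'change a, add b'
git show --name-status --oneline HEAD   # lists a.txt as M and b.txt as A
```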
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important because, with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart it normally (or issue FLUSH PRIVILEGES to re-enable the grant tables and apply the change in place). You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there than on pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it really need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but basically it means: if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, one thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see which branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
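<p>To watch the whole cycle end to end without touching a real project, here is a self-contained sketch using throwaway repositories (all paths and branch names below are invented for the demo):</p>

```shell
# Throwaway demo: create a remote with a branch, delete the branch on the
# remote side, then prune the stale remote-tracking ref from the clone.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare remote.git
git clone -q "$tmp/remote.git" work
cd work
git config user.email demo@example.invalid
git config user.name demo
git commit -q --allow-empty -m 'initial commit'
branch=$(git symbolic-ref --short HEAD)
git push -q origin "$branch" "$branch:defunct"
git fetch -q origin                         # make sure origin/defunct exists locally
git -C "$tmp/remote.git" branch -D defunct  # the branch dies on the remote...
git branch -r                               # ...but origin/defunct is still listed
git remote prune origin                     # reports origin/defunct as pruned
git branch -r                               # stale entry is gone
```

The same effect is available in one step with git fetch --prune, which fetches and prunes stale remote-tracking branches together.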
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
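<p>The embedded gist may not render in every feed reader, so here is a rough sketch of the shape of that override. Treat the exact event and observer node names as assumptions to verify against app/code/core/Mage/Log/etc/config.xml, and repeat the pattern for each logging observer you want switched off:</p>
<pre><code>&lt;config&gt;
    &lt;frontend&gt;
        &lt;events&gt;
            &lt;controller_action_predispatch&gt;
                &lt;observers&gt;&lt;log&gt;&lt;type&gt;disabled&lt;/type&gt;&lt;/log&gt;&lt;/observers&gt;
            &lt;/controller_action_predispatch&gt;
            &lt;controller_action_postdispatch&gt;
                &lt;observers&gt;&lt;log&gt;&lt;type&gt;disabled&lt;/type&gt;&lt;/log&gt;&lt;/observers&gt;
            &lt;/controller_action_postdispatch&gt;
            &lt;!-- repeat for the remaining Mage_Log observer events --&gt;
        &lt;/events&gt;
    &lt;/frontend&gt;
&lt;/config&gt;
</code></pre>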
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>This fix is easy, replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one instance but (programmers are human) missing the other, identical line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not supported by the stock Chef gem (currently version 10.12.0). To use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the Magento admin menu, you have two simple options: remove it using CSS, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>
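<p>In case the gist does not render in your reader, the override is roughly this shape (the &lt;catalog&gt; node here is just an example menu entry to hide, and Nonexistent_Module is a deliberately fake module name):</p>
<pre><code>&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;config&gt;
    &lt;menu&gt;
        &lt;catalog&gt;
            &lt;depends&gt;
                &lt;module&gt;Nonexistent_Module&lt;/module&gt;
            &lt;/depends&gt;
        &lt;/catalog&gt;
    &lt;/menu&gt;
&lt;/config&gt;
</code></pre>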

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any Magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of MySQL in production, and the resulting Magento report/xxxx files swamped everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped semicolon (\;) terminates the command sequence, much like ; does in regular bash.</p>
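<p>Putting the touch and find pieces together, here is a self-contained run-through in a throwaway directory (file names and timestamps are invented for the demo):</p>

```shell
# Create boundary files and test files on either side of the range,
# then list and delete only the files that fall between the boundaries.
set -e
dir=$(mktemp -d)
cd "$dir"
touch -t 202001010000 start_date_file
touch -t 202012310000 end_date_file
touch -t 201906150000 too_old.txt   # before the range
touch -t 202006150000 in_range.txt  # inside the range
touch -t 202106150000 too_new.txt   # after the range
# end_date_file itself satisfies the range test (-newer is strict),
# so exclude it by name before deleting
find . -type f -newer start_date_file ! -newer end_date_file ! -name end_date_file -ls
find . -type f -newer start_date_file ! -newer end_date_file ! -name end_date_file -exec rm -f {} \;
```

After the run, only in_range.txt is gone; the boundary files and the files outside the range are untouched.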
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy-to-install gem (which comes with horribly out-of-date basebox templates) or installing the latest version from source, which uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in: it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you make the change no problem.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because of shell quoting rules, the single quote is passed in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
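<p>To try the shaping part of the pipeline without a database handy, you can stand in for the mysql client&#39;s --silent output with printf (the values here are invented; the sed wrapper is written in the portable &amp; form, where &amp; stands for the whole matched line):</p>

```shell
# Simulate three rows of single-column mysql --silent output, then quote,
# join, and wrap them into a Javascript array literal.
printf '%s\n' apple banana cherry \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# -> ['apple','banana','cherry'];
```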
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. it is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify:</p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products, and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and let us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. While it means the validator also has to be constructed on each loop, that allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic gnu coreutils... and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break, in the absolute best case, I mean merely became unbootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. First step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>Last thing, we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
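<p>Assuming the same mount point as in the examples above, that&#39;s just:</p>
<pre><code>$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>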
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
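<p>For reference, the repair from inside the chroot looked roughly like this (your root device may well differ from /dev/sda, so adjust accordingly):</p>
<pre><code>$ apt-get update
$ apt-get upgrade
$ grub-install /dev/sda
$ update-grub
</code></pre>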
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (the opposite of the if branch, where the product id column comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows, one per unique status code. For this code to work as you would expect, the entity id (product id) needs to be the first column in the result set, so that it is used as the key.</p>
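<p>To illustrate with a contrived sketch (plain PHP, not Magento code) of what fetchPairs effectively does with the first two columns of each row:</p>
<pre><code>// Rows come back as (status, entity_id) - the broken column order
$rows = array(array(1, 10), array(1, 11), array(2, 12));
$pairs = array();
foreach ($rows as $row) {
    $pairs[$row[0]] = $row[1]; // keyed by status, so entries overwrite each other
}
// $pairs == array(1 =&gt; 11, 2 =&gt; 12) - product id 10 has been lost
</code></pre>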
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software, Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is still, even in 2012, an unusual approach for a lot of developers. Typically I&#39;ve used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles, not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end, you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into the common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practise what they preach. GOOS is a book for people who write real code in the real world. Sadly, there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Michael Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile gets sourced only on login. Specifically, this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a new terminal.</p>
<p>There is some confusion around login shells, such as when you use the su - command or run an explicit login shell provided by a desktop environment. In these cases bash sources only .bash_profile; .bashrc is read only if your .bash_profile sources it explicitly (as many distro-provided .bash_profile files do).</p>
<p>I tend to put environment setup in .bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
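<p>If you take the second approach, a minimal .bash_profile looks like this:</p>
<pre><code># ~/.bash_profile
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
</code></pre>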
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
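<p>Alternatively, if you&#39;d rather declare the block in your theme&#39;s layout XML, a sketch following standard Magento 1.x layout conventions (the parent reference and block name here are illustrative; adjust them to suit):</p>
<pre><code>&lt;reference name=&quot;content&quot;&gt;
    &lt;block type=&quot;cms/block&quot; name=&quot;my.static.block&quot;&gt;
        &lt;action method=&quot;setBlockId&quot;&gt;&lt;block_id&gt;identifier&lt;/block_id&gt;&lt;/action&gt;
    &lt;/block&gt;
&lt;/reference&gt;
</code></pre>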
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
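<p>With that in place, chained calls on the mock work just as they would on the real object, so code exercising Zend_Mail&#39;s fluent API runs happily against it:</p>
<pre><code>$mock-&gt;setSubject(&#39;Hello&#39;)
     -&gt;setBodyText(&#39;World&#39;)
     -&gt;send();
</code></pre>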
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm can only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libXss and the assorted Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in a program depending on whether it is assembled for x86_64 or x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entity ID, and then returns whichever entry matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The changed code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
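<p>This makes a handy guard in deployment scripts: only reload Varnish when the new VCL actually compiles. A hedged sketch (the VCL path and service name are placeholders for your own setup):</p>

```shell
# Succeed only when varnishd can compile the candidate VCL cleanly.
vcl_ok() {
    varnishd -C -f "$1" > /dev/null 2>&1
}

if command -v varnishd > /dev/null 2>&1; then
    vcl_ok /etc/varnish/default.vcl && sudo service varnish reload
else
    echo "varnishd not installed; skipping check"
fi
```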
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, Identify lets you get information about a file: size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example, to convert a bunch of images from 1920x1080 down to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
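<p>The same batch resize can be driven by find instead of a bash loop, which also copes with images in subdirectories and with spaces in filenames. A sketch (the photos directory name is a placeholder):</p>

```shell
# Resize every .jpg under photos/, writing *-resized.jpg alongside each one.
# -print0 together with read -d '' keeps filenames with spaces intact.
mkdir -p photos
find photos -name '*.jpg' -print0 |
    while IFS= read -r -d '' img; do
        convert -resize '1280x720' "$img" "${img%.jpg}-resized.jpg"
    done
```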
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch into a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
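<p>If the bare-colon refspec still feels too cryptic, newer git releases (1.7.0 and up) accept an explicit --delete flag that does the same thing. A sketch against a throwaway local remote:</p>

```shell
# Build a disposable remote, publish a develop branch, then delete it.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/clone" 2> /dev/null
cd "$tmp/clone"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git branch develop
git push -q origin HEAD develop      # publish current branch plus develop
git push -q origin --delete develop  # same effect as: git push origin :develop
git branch -r                        # develop is no longer listed
```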
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
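<p>On Linux, file with the -i flag reports the MIME type along with the charset it detects. A small sketch with a hypothetical input file:</p>

```shell
# Write a file containing a stray Latin-1 byte (0xe9, 'é' in cp1252/iso-8859-1),
# then ask file what encoding it thinks the content is in.
printf 'caf\xe9\n' > latin1.txt
file -i latin1.txt    # reports a non-UTF-8 charset for this file
```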
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
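<p>To see why the plain &#39;&gt;&#39; operator will not do here, try redirecting a filter back onto its own input file. The shell truncates the output file before the filter ever reads a byte, and the data is simply gone:</p>

```shell
# tr stands in for iconv here; the failure mode is identical.
printf 'hello\n' > demo.txt
tr a-z A-Z < demo.txt > demo.txt   # '>' truncates demo.txt before tr runs
[ -s demo.txt ] || echo "demo.txt is now empty"
```

Sponge avoids this by buffering everything first and only opening the output file at EOF.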
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
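<p>For this particular case an ordinary pipe works just as well, since tar happily reads an archive from stdin. A local stand-in (no network needed) to show the idea:</p>

```shell
# Equivalent in spirit to: wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxv
mkdir -p demo && echo hi > demo/file.txt
tar czf - demo | tar tzv     # list the archive straight off the pipe
```

Process substitution earns its keep when a command insists on a filename rather than stdin, or when you need more than one input stream at once.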
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
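<p>It is worth capturing that list to a file first, so the same modules can be re-enabled once the upgrade is done. A sketch (run from the Drupal root as before; the modules.txt filename is arbitrary):</p>

```shell
# Save the enabled non-core module list before disabling anything.
# '|| true' lets this sketch degrade gracefully where drush is absent.
drush pm-list --no-core --type=module --status=enabled --pipe > modules.txt || true

# After the upgrade, re-enable the lot:
# drush pm-enable $(cat modules.txt)
```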
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
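<p>On git 1.7.0 and later the last two steps collapse into one, since push grew a -u/--set-upstream flag. A sketch against a throwaway local remote:</p>

```shell
# Create a disposable remote and publish a feature branch with -u.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2> /dev/null
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature   # push and set upstream in one step
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'   # origin/my-new-feature
```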
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
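<p>Note that stash apply leaves the stashed entry behind (handy if the apply goes wrong); git stash pop applies and drops it in one go. The whole rescue, sketched with a throwaway repo:</p>

```shell
# Recreate the mishap: a change made on the default branch, wanted on develop.
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git branch develop
echo 'work in progress' > feature.txt
git add feature.txt                   # oops, this belongs on develop
git stash -q                          # shelve the change
git checkout -q develop
git stash pop -q                      # reapply it on the right branch
git status --porcelain                # feature.txt is back, now on develop
```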
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into the Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig:</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
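<p>Incidentally, rather than writing .gitconfig by hand, you can let git write it for you. The sketch below uses a throwaway HOME; on a real server you would run the two git config commands as the jenkins user (e.g. via sudo -u jenkins -H) so the file lands in /var/lib/jenkins/.gitconfig:</p>

```shell
set -e
export HOME="$(mktemp -d)"                 # stand-in for /var/lib/jenkins
git config --global user.name  "Jenkins"
git config --global user.email "jenkins@localhost"
cat "$HOME/.gitconfig"                     # shows the same [user] block as above
```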
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases; we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that is, at first glance, daunting. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before the project workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><code>reload</code> - Reload the server configuration</li>
<li><code>restart</code> - Restart the server</li>
<li><code>exit</code> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepared package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this: of all the caching quirks in Magento, this is the one that has cost me the most time.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months, one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature, and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace, though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a>, it&#39;s time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into GitHub (or any other remote git repository). Many of these involve losing your branch history and creating a brand-new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent git pull invocations will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best with the least amount of work.</p>
<p>To avoid having to do this for every new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has, in the past, been notoriously difficult to employ TDD practices on. Luckily, in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a> testing frameworks released to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository:</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.2.0 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: call mysql to get a list of tables matching the pattern, put them in an array, then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
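<p>The combined approach boils down to something like the following sketch (not the exact gist: it assumes credentials come from your ~/.my.cnf, and note that SHOW TABLES LIKE takes SQL wildcards, so the pattern is &#39;mytables%&#39; rather than a shell glob):</p>
<pre><code>function mysqldump_bypattern() {
    local db=$1 pattern=$2
    # -N suppresses column headers, -B gives plain batch output
    local tables=$(mysql -N -B -e &quot;SHOW TABLES LIKE &#39;$pattern&#39;&quot; &quot;$db&quot;)
    mysqldump &quot;$db&quot; $tables &gt; &quot;${db}_tables.sql&quot;
}
</code></pre>
<p>Called as, for example: mysqldump_bypattern mydb &#39;mytables%&#39;</p>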
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest egit/jgit packages not yet being in the Indigo release p2 update repository. Until they are, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install the plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, this time using the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
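<p>The simplest cure, of course, is to use Symbols consistently on both sides. Here is the same example reworked (using a fresh hash, myhash, with Symbol keys throughout):</p>
<pre><code>keys = [:mykey, :another_key]
myhash = { :mykey =&gt; &#39;hello world&#39;, :another_key =&gt; &#39;goodbye world&#39; }
keys.each { |k| 
  puts myhash[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>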
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either tells you to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this looks like yet another variable-like construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages) two strings, even if they consist of the same sequence of characters, are different objects. Two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this can save a tremendous amount of memory.</p>
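<p>You can see this identity for yourself from the shell (a quick sketch; any recent Ruby should behave the same way):</p>
<pre><code>$ ruby -e &#39;puts :key.object_id == :key.object_id&#39;
true
$ ruby -e &#39;puts &quot;key&quot;.equal?(&quot;key&quot;)&#39;
false
</code></pre>
<p>Every :key refers to the one object, while each &quot;key&quot; literal creates a brand new String.</p>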
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. In a high-level language, the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily grow into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character for single-byte encodings and up to four bytes per character for UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: compression slows the dump down, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but lengthy dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean a MyISAM table can be written to mid-dump, losing a consistent snapshot of it; that risk is weighed against the cost of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
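<p>Putting these options together, a backup command for a busy database might look something like this (a sketch; the user and database names are placeholders, and with table locks avoided, piping straight into gzip is no longer a concern):</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables --disable-keys --no-autocommit -uuser -p mydatabase | gzip -c &gt; mydump.sql.gz&#13;
</code></pre>&#13;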
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that has only the meta information and not a working copy) and it usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally it's shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But getting a test environment set up and representative of the live system is often a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great; that is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file, and sample configurations for PHP_CodeSniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will only sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed into the program. </p>&#13;
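<p>If you want to see the Bash side of the comparison for yourself, here is a two-line sketch (the file name argv-demo.sh is just illustrative):</p>

```shell
# Bash, like C and PHP, puts the program's name first: it is $0 here,
# and the real arguments start at $1.
cat > argv-demo.sh <<'EOF'
echo "$0"
echo "$1"
EOF
bash argv-demo.sh helloworld
# prints "argv-demo.sh" then "helloworld"
```

<p>Ruby hasn't lost the program name, it just keeps it separately in the $0 global (alias $PROGRAM_NAME) rather than in ARGV.</p>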
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin/feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets confused because, when you install the MySQL Rubygem as directed by Rake, you link against the MySQL bundled with OSX and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often want to update documents over the course of the index's lifecycle.  This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path.  Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored, the distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional 'filter' parameter. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param  string $controller_class&#13;
 * @return ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    // The filter constants belong to ReflectionMethod, not ReflectionProperty&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema; we want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
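<p>A word of caution: this blindly rewrites every occurrence of the string 'latin1', including any that happen to appear inside post content. If you want to sanity-check the substitution first, try it on a one-line sample (GNU sed syntax; BSD/OSX sed needs a suffix argument after -i, e.g. sed -i ''):</p>

```shell
# A miniature stand-in for the real dumpfile, just to see what sed will do.
printf 'CREATE TABLE post (body text) ENGINE=MyISAM DEFAULT CHARSET=latin1;\n' > sample.sql
sed -i 's/latin1/utf8/g' sample.sql
cat sample.sql
# → CREATE TABLE post (body text) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```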
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
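<p>Again, you can convince yourself iconv is doing the right thing on a tiny sample before converting the whole dump (file names here are just illustrative):</p>

```shell
# 'café' with the e-acute stored as a single latin1 byte (0xE9, octal \351);
# after conversion it becomes the two-byte UTF-8 sequence 0xC3 0xA9.
printf 'caf\351\n' > sample-latin1.txt
iconv -f latin1 -t utf-8 sample-latin1.txt > sample-utf8.txt
cat sample-utf8.txt
# → café
```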
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code> <code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you.  So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[For some people, using #VIM comes naturally, i.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as SQL. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (e.g. 
Mac OS X). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and be left with a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an appreciation of memory management. C is rare among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str refers to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Extensions were originally where your business logic was meant to go. They are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread() and fwrite() on the memory stream (or pull its contents back out with stream_get_contents()), or push it out over the network using the TCP streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day or year) you can use the following approach in MySQL:</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, e.g. just have '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded post data), and the response from your server.</p>&#13;
<p>Now assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[development tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive file system (e.g. case-sensitive HFS+ on a Mac, or a typical Unix file system) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making only update simple attributes (for example, a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find out the attribute code, look it up either in the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about 1/5 of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table with a date more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is therefore more than 30 days old.</p>&#13;
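The same interval arithmetic can be sanity-checked from the shell. This sketch assumes GNU date, as found on Linux; BSD/macOS date spells the interval differently (with -v):

```shell
# 30 days before 2010-05-20, mirroring DATE_SUB(CURDATE(), INTERVAL 30 DAY)
# (GNU date; BSD/macOS date uses -v-30d instead of -d with a relative item)
date -u -d '2010-05-20 -30 days' +%Y-%m-%d    # 2010-04-20
```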
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three-column layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile, and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
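For example, the relevant line in ~/.profile might look like this (substitute whichever UTF-8 locale `locale -a` reports on your system):

```shell
# Make the shell environment match the terminal's utf-8 encoding
export LC_ALL='de_DE.UTF-8'
```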
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, change: a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, as that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into MySQL, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
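As an aside, you can sanity-check a conversion like this from the shell with the iconv(1) utility. Here 0xA9, the iso-8859-1 copyright sign, becomes the two-byte utf-8 sequence 0xC2 0xA9:

```shell
# Convert a single iso-8859-1 copyright sign to utf-8 and dump the bytes
printf '\xa9' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1    # c2 a9
```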
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>On Mac OSX (which ships BSD sed) it doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory. GNU sed on Linux, by contrast, accepts -i with no suffix.</p>&#13;
<p>The trick is to pass an explicit, empty backup suffix: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
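To make the difference concrete, here is a sketch you can run on Linux (GNU sed), with the BSD/macOS form shown in a comment:

```shell
f=$(mktemp)
printf 'hello world\n' > "$f"

# GNU sed: no separate suffix argument after -i
sed -i 's/hello/goodbye/g' "$f"
cat "$f"    # goodbye world

# BSD/macOS sed wants the (possibly empty) backup suffix as its own argument:
#   sed -i '' 's/hello/goodbye/g' helloworld.txt
rm -f "$f"
```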
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br />http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) construct expands to the sequence of array indices, 0 through n-1, where ${#FILES[@]} is the number of elements in FILES. The seq command produces a sequence of numbers from x to y; if you call seq 0 4, you will get a line with 0, 1, 2, 3, 4 on it.</p>&#13;
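That said, if you only need the values and not the indices, bash can expand the array directly. A simpler equivalent of the loop above:

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )

# "${FILES[@]}" expands to each element in turn, safely quoted
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```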
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment or shipping account credentials, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Usage: php thisfile.php 'value-to-encrypt' (run from the Magento root)&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately, this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to work from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
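<p>For reference, here is a minimal .php_cs sketch. This assumes the Symfony\CS config API as it stood in the master branch at the time; the exact class and method names may differ in your checkout, so treat it as a starting point rather than gospel.</p>

```php
<?php
// Hypothetical minimal .php_cs: run the default fixers over the project root.
return Symfony\CS\Config\Config::create()
    ->finder(
        Symfony\CS\Finder\DefaultFinder::create()->in(__DIR__)
    );
```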
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
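<p>If you&#39;d rather never forget -u in the first place, a companion alias can push the current branch and set its upstream in one step. This is a sketch of my own devising, not from the gitconfig above: the alias name <em>pu</em> and the remote name origin are arbitrary choices, so rename to taste.</p>

```shell
# Hypothetical companion alias: push the current branch to origin and set
# its upstream in one go (HEAD expands to the current branch name).
git config --global alias.pu 'push -u origin HEAD'
# then simply: git pu
```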
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
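<p>A quick demonstration of the problem, along with the classic redirection workaround (plain POSIX sh, nothing exotic assumed):</p>

```shell
count=0
# The while loop runs in a subshell because it is part of a pipeline,
# so its increments are thrown away when the pipeline ends.
printf 'a\nb\nc\n' | while read -r line; do count=$((count + 1)); done
echo "$count"   # prints 0

# Workaround: feed the loop from a redirection instead of a pipe,
# which keeps it in the current shell.
count=0
while read -r line; do count=$((count + 1)); done <<EOF
a
b
c
EOF
echo "$count"   # prints 3
```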
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a “fast-forward”.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible, you can force the merge through with</p>
<pre><code>$ git merge --no-ff
</code></pre>
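<p>The same constraint is also available per invocation as git merge --ff-only, if you don&#39;t want the global setting. Here is a throwaway-repo sketch showing a fast-forward merge going through cleanly with no merge commit created; the branch and commit names are made up for the demo.</p>

```shell
# Build a tiny repo where 'feature' is strictly ahead of the starting branch.
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m base
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git checkout -q -b feature
git commit -q --allow-empty -m change
git checkout -q "$main"

# Fast-forwards, creating no merge commit; it would abort instead
# if the two branches had diverged.
git merge --ff-only feature
git log --oneline
```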
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD.</p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a test style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature testing tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same.</p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) AS min_price
FROM tableref
GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
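<p>A worked example with some hypothetical data, to make the behaviour concrete (table and column names are invented for the demo):</p>

```sql
-- Hypothetical table: group 1 contains a zero-priced product.
CREATE TABLE products (group_id INT, price DECIMAL(8,2));
INSERT INTO products VALUES (1, 0.00), (1, 9.99), (1, 14.99), (2, 4.50);

-- NULLIF turns the 0.00 into NULL, which MIN then skips.
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products
GROUP BY group_id;
-- group 1 -> 9.99 (a plain MIN(price) would return 0.00), group 2 -> 4.50
```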
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired.</p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55.</p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-gd php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert the following line, or append magento.dev to an existing 127.0.0.1 entry
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing it under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
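<p>Before loading the server it&#39;s worth syntax-checking the configuration first. A quick sketch, assuming MacPorts installs the binary as /opt/local/sbin/nginx:</p>
<pre><code>$ sudo /opt/local/sbin/nginx -t
# nginx reports whether the configuration file syntax is ok and the test passed
</code></pre>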
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember when. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can run &#39;port variants git-core&#39; to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal and start Selenium:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the socket path for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
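<p>If you also use mysqli or the legacy mysql extension alongside PDO, each has its own socket directive. The directive names below are standard PHP ini settings, and the same trick covers them (tee -a appends rather than clobbering the file):</p>
<pre><code>echo &#39;mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
echo &#39;mysqli.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>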
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
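<p>One more note: when you&#39;re done with the share, unmount it rather than leaving a stale connection around. On OS X plain umount does the job; Linux FUSE setups ship fusermount (the mountpoint here is the one from the example above):</p>
<pre><code>$ umount ~/Sites/awshost
# or, on Linux
$ fusermount -u ~/Sites/awshost
</code></pre>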
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
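<p>Incidentally, if all you want is the trailing image name, you can trim the leading slash from strrchr&#39;s result yourself, or lean on basename(), which treats the URL like a path. A quick check from the shell:</p>
<pre><code>$ php -r &#39;echo substr(strrchr(&quot;http://www.google.com/a/b/c/d.img&quot;, &quot;/&quot;), 1);&#39;
d.img
$ php -r &#39;echo basename(&quot;http://www.google.com/a/b/c/d.img&quot;);&#39;
d.img
</code></pre>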
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it or, worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox, and of PARC particularly, on their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
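<p>In practice that means recreating the module&#39;s directory path under the local codepool and copying the file across before editing it. Run from the Magento root, something like:</p>
<pre><code>$ mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
$ cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
    app/code/local/Mage/GoogleCheckout/Model/Api/Xml/
# now edit the copy under app/code/local and leave the core file untouched
</code></pre>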
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb: &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx. It is by calling the nginx server command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
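<p>Incidentally, -s accepts a couple of other signals that are worth knowing about:</p>
<pre><code># re-read the configuration and gracefully replace the worker processes
$ sudo nginx -s reload
# reopen the log files, handy after log rotation
$ sudo nginx -s reopen
</code></pre>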
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and in particular to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to work out why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk&#39;s development formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few more layers of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero, so at the end of a thirty-minute work sprint I would take five minutes to quickly flick through the list. Most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading them than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
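<p>To make that concrete, here is a tiny self-contained demo (the two sample records are invented for illustration):</p>

```shell
# Two colon-delimited records on stdin; -s':' sets the field
# delimiter and -t aligns the fields into a table.
printf 'root:x:0:0\ndaemon:x:1:1\n' | column -s':' -t
```

<p>The same invocation works on any delimited data, for example /etc/group, or CSV with -s&#39;,&#39;.</p>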
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s little things like this that, when you work in a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story, very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine; however, if we want to debug during a phpunit test, normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to its own port 9000 back to your machine&#39;s port 9000. When xdebug goes to connect to localhost:9000 on the VM, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The alternative is Vim and its xdebug plugin, which isn&#39;t bad, but once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves. </p>
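<p>For the uninitiated, a project opts in simply by declaring what it needs. A minimal composer.json sketch (the package names here are invented for illustration; note the per-package stability flag that PEAR never managed):</p>

```json
{
    "require": {
        "acme/package-x": "1.0.*",
        "acme/package-y": "2.0.*@beta"
    }
}
```

<p>Running composer install resolves these constraints and installs everything into the project&#39;s own vendor directory, so two projects can happily depend on different versions of the same library.</p>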
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so we use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you the list of files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care what differs in the contents of two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
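<p>If you want to try this without touching a real project, a throwaway repo demonstrates it (the file names and commit message are invented for illustration):</p>

```shell
# Build a scratch repo with one commit touching two files
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo one > a.txt
echo two > b.txt
git add .
git -c user.name=Demo -c user.email=demo@example.com commit -qm 'add a and b'

# Prints the commit header followed by just the affected file names
git show --name-only HEAD
```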
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than on pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branch list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. By setting the observer&#39;s type to the string &#39;disabled&#39;, the existing observer is replaced with one that will never fire.</p>
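<p>For readers where the gist doesn&#39;t render, the shape of the trick looks something like this. This is a sketch from memory, not the gist itself; check the event and observer names against your version&#39;s Mage_Log config.xml before relying on it:</p>

```xml
<config>
    <frontend>
        <events>
            <!-- repeat this block for each Mage_Log observer event,
                 e.g. controller_action_postdispatch, customer_login, ... -->
            <controller_action_predispatch>
                <observers>
                    <log>
                        <!-- overriding the observer type disables it -->
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```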
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11: if breadcrumbs are enabled, the unescaped query text is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one instance but (programmers are human) missing the other, exactly the same line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef Gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
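<p>As a sketch, the adminhtml.xml entry looks something like the following. The menu path and module name are illustrative; substitute the path of the menu item you actually want to hide:</p>

```xml
<!-- adminhtml.xml sketch; menu path and module name are illustrative -->
<config>
    <menu>
        <catalog>
            <children>
                <url_rewrite>
                    <depends>
                        <module>Some_Nonexistent_Module</module>
                    </depends>
                </url_rewrite>
            </children>
        </catalog>
    </menu>
</config>
```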
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped ; terminates the command sequence (much like it does in regular bash).</p>
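<p>Putting the touch and find steps together, here is a self-contained run you can paste into a shell; the paths and dates are made up purely for demonstration:</p>

```shell
# Create boundary files plus files inside and outside the range,
# then list only the files that fall between the boundaries.
demo=/tmp/find-range-demo
mkdir -p "$demo"
touch -t 202001010000 "$demo/start_date_file"
touch -t 202012310000 "$demo/end_date_file"
touch -t 202006150000 "$demo/in_range.txt"
touch -t 201906150000 "$demo/too_old.txt"
touch -t 202106150000 "$demo/too_new.txt"

# Newer than the start boundary and not newer than the end boundary:
# only in_range.txt is printed.
find "$demo" -type f -name '*.txt' \
    -newer "$demo/start_date_file" ! -newer "$demo/end_date_file"
```

<p>Swapping the final find for one ending in -exec rm -rf {} \; would then delete just the in-range file.</p>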
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to setup a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher rated power supply can support a lower rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
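<p>A quick sketch of that /etc/shells step, run here against a scratch copy so it is safe to execute anywhere; on the real system you would append to /etc/shells itself via sudo before running chsh:</p>

```shell
# Work on a scratch copy of /etc/shells (on a real Mac, edit /etc/shells
# itself with sudo). The path below assumes a Macports-installed bash.
cp /etc/shells /tmp/shells.demo 2>/dev/null || printf '/bin/sh\n/bin/bash\n' > /tmp/shells.demo
echo '/opt/local/bin/bash' >> /tmp/shells.demo

# chsh will now accept the Macports bash, since it appears in the list.
grep '/opt/local/bin/bash' /tmp/shells.demo
```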
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
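<p>To see the quoting and joining stages in isolation, you can feed the same pipeline some dummy lines in place of the mysql output; note the \( \) group must be escaped, as sed&#39;s basic regular expressions require:</p>

```shell
# Simulate three rows of mysql --silent output, quote each line,
# join the lines with commas, then wrap in array literal brackets.
printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' \
    | sed 's/\(.*\)/[\1];/'
# prints: ['red','green','blue'];
```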
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name:[Start TO Finish]
</code></pre>
<p>You can also use wildcards and specific constants in a logical way, e.g.:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in this date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules, and this caused that product/salesrule index loop to detonate.</p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While this means it also has to be constructed afresh each time, it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can (in theory) run on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot; by break, in the absolute best case, I mean they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now, in order to have a functional chroot, we need the proc, dev and sys subsystems mounted inside the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>The last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions: the kernel and kernel modules will be those of the host, so if you need to access some specific hardware, you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each; this is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just two rows (one for each unique status code). In order for this code to work as you would expect, the entity id (product id) needs to be first in the result set, so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product ID is always used as the key in fetchPairs() and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software, Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is still an unusual perspective, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles, not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that nothing actually hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into the common issues developers face when building OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display in the depth and quality of their references. While GOOS is a pretty domain-specific (Mock Object) text, it serves as a wonderful launching pad for further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly, there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile gets sourced only on login. Specifically, this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion around login shells started after you have already logged in, such as when you use the su - command or the explicit login-shell option some terminal emulators provide. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only read if .bash_profile sources it explicitly.</p>
<p>I tend to put environment setup in .bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source ~/.bashrc line in your .bash_profile and then put everything in .bashrc.</p>
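<p>For example, a minimal .bash_profile following that pattern (a sketch, not prescriptive) might look like this:</p>

```shell
# Hypothetical ~/.bash_profile: one-time login setup, then delegate
# everything interactive to ~/.bashrc.
export PATH="$HOME/bin:$PATH"   # example one-time environment setup

# Source .bashrc so login shells get the interactive config too.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```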
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
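<p>The same trick translates to other mocking libraries. As a point of comparison (not PHPUnit, and the method names are just examples from Zend_Mail&#39;s API), here is a rough analogue using Python&#39;s unittest.mock:</p>

```python
# Mimic PHPUnit's returnSelf(): configure each fluent method on the
# mock to return the mock object itself, so calls can be chained.
from unittest.mock import MagicMock

mail = MagicMock()
for method in ("addTo", "setSubject", "setBodyText"):
    getattr(mail, method).return_value = mail

# The chained calls all resolve to the same mock object.
result = mail.addTo("a@example.com").setSubject("hi").setBodyText("hello")
```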
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm can only be used when the VM is powered off; use controlvm for a running VM.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libXss, not being there. But it <em>is</em> there; specifically, though, it&#39;s a 64-bit library. I bet Skype isn&#39;t 64-bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that: we need compatible x86 (32-bit) shared libraries. When you hit these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing both a compatible libXss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32-bit executables you need to install the i386 libc development package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately while upgrading my test suites to be 3.6-compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entityId, and then returns the entry matching $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t already doing it this way. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
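<p>As an aside, the sed call in that loop can be replaced with bash parameter expansion, avoiding a pipe through echo and sed for every file; a minimal sketch (the filename is illustrative):</p>

```shell
# Strip a trailing .jpg and append -resized.jpg using ${var%pattern},
# bash's "remove shortest matching suffix" expansion.
IMAGE="holiday.jpg"
echo "${IMAGE%.jpg}-resized.jpg"   # holiday-resized.jpg
```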
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would fit the image inside the requested box, resizing it to 1152x720 to preserve its 16:10 ratio. But if we <em>really</em> want it to ignore common sense and squish things down to exactly 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of the local branch mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes the syntax easier to remember.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
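<p>As an aside, git 1.7.0 and later also accept a more readable spelling of the same operation: git push origin --delete somebranch. A sketch against a throwaway local repository standing in for a real remote (all paths here are illustrative):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"             # throwaway "remote"
git init -q "$tmp/work" && cd "$tmp/work"
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m init
git push -q "$tmp/origin.git" HEAD:develop       # create remote develop
git push -q "$tmp/origin.git" --delete develop   # same as: push origin :develop
git ls-remote --heads "$tmp/origin.git"          # develop is no longer listed
```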
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
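<p>To see why sponge&#39;s soak-up-then-write behaviour matters, here is what happens with a plain &#39;&gt;&#39; redirect: the shell truncates the output file before the filter ever reads it (tr stands in for iconv in this demo):</p>

```shell
cd "$(mktemp -d)"                  # scratch directory for the demo
printf 'hello\n' > demo.txt
tr a-z A-Z < demo.txt > demo.txt   # '>' truncates demo.txt before tr reads it
wc -c < demo.txt                   # 0 bytes: the input was destroyed
```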
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
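<p>Note that for this particular case a plain pipe achieves the same result; process substitution really pays off when a command insists on a filename argument rather than stdin. Demonstrated below with a locally built tarball, with cat standing in for the wget download:</p>

```shell
cd "$(mktemp -d)"                    # scratch directory for the demo
mkdir atarfile && echo hello > atarfile/file.txt
tar czf atarfile.tar.gz atarfile && rm -r atarfile
cat atarfile.tar.gz | tar zxf -      # pipe equivalent of: wget -q -O - URL | tar zxf -
cat atarfile/file.txt                # hello
```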
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (they still had active dependants on the first pass). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the Drush disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
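<p>On git 1.7.0 and later, the push and --set-upstream steps collapse into one with the -u flag. A sketch using a throwaway local repository as the remote (paths here are illustrative):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"       # throwaway "remote"
git init -q "$tmp/work" && cd "$tmp/work"
git remote add origin "$tmp/origin.git"
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m init
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature       # push and record upstream in one go
git rev-parse --abbrev-ref my-new-feature@{upstream}   # origin/my-new-feature
```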
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
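<p>Related: git stash pop applies the stash <em>and</em> drops it in one step, whereas apply leaves the entry in the stash list. A sketch in a scratch repository:</p>

```shell
set -e
cd "$(mktemp -d)" && git init -q .
git config user.name a && git config user.email a@b
git commit -q --allow-empty -m init
echo change > file.txt && git add file.txt   # an "accidental" change
git stash                     # put the change aside
git stash pop                 # re-apply it and drop the stash entry
git stash list                # prints nothing: the stash is empty again
```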
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
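<p>The same [user] section can be written without editing the file by hand, using git config --global run as the jenkins user (e.g. prefixed with sudo -u jenkins -H). Sketched here against a scratch HOME so nothing real is touched:</p>

```shell
export HOME=$(mktemp -d)                 # scratch HOME for the demo
git config --global user.name  "Jenkins"
git config --global user.email "jenkins@localhost"
cat "$HOME/.gitconfig"                   # shows the [user] section from above
```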
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector, along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new project, we copy this template and give our project a name. Choose whatever you like, but for the purposes of this tutorial I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository; for example, mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git and then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries; it&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help with this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
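<p>If you find yourself using the cli jar a lot, a tiny wrapper function saves retyping the invocation (the jar path and default URL here are assumptions based on the setup above; adjust them to your environment):</p>

```shell
# jcli - shorthand for the Jenkins CLI jar downloaded above; assumes the
# jar sits in the current directory and the server runs on localhost:8080.
jcli() {
  java -jar jenkins-cli.jar -s "${JENKINS_URL:-http://localhost:8080}" "$@"
}
```

<p>Commands then become, for example, jcli help or jcli clear-queue.</p>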
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reloads the server configuration</li>
<li>restart - restarts the server</li>
<li>exit - shuts the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
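<p>These can be wrapped in a small helper that refuses anything outside the three supported commands (the base URL is an assumption for a default local install on port 8080):</p>

```shell
# jenkins_http CMD - issue one of Jenkins's http admin commands
# (reload, restart or exit) against the server; anything else is rejected.
jenkins_http() {
  case "$1" in
    reload|restart|exit) curl -s "${JENKINS_URL:-http://localhost:8080}/$1" ;;
    *) echo "unknown command: $1" >&2; return 1 ;;
  esac
}
```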
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
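<p>If you script your installs, the port change can be automated too. This is a sketch assuming the stock Ubuntu /etc/default/jenkins layout and GNU sed:</p>

```shell
# set_jenkins_port FILE PORT - rewrite the HTTP_PORT line in a Jenkins
# defaults file. FILE would normally be /etc/default/jenkins (needs root).
set_jenkins_port() {
  sed -i "s/^HTTP_PORT=.*/HTTP_PORT=$2/" "$1"
}
```

<p>For example, run set_jenkins_port /etc/default/jenkins 8081 in a root shell, then restart the service as above.</p>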
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
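<p>When the link drops repeatedly, the same command can be wrapped in a retry loop (the loop is my own sketch; the rsync flags are exactly the ones above):</p>

```shell
# rsync_resume LOCALFILE USER@HOST:/remote/path - keep retrying a
# partial-aware rsync until the transfer finally completes.
rsync_resume() {
  until rsync --partial --progress --rsh=ssh "$1" "$2"; do
    echo "transfer interrupted, retrying in 5 seconds..." >&2
    sleep 5
  done
}
```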
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file, among a lot of other verbiage, is a message like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful by itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be done either through the graphical package manager or with aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, E/JGIt plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You now have set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent git pull invocations will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch:</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream:</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best, as it requires the least amount of work.</p>
<p>To avoid having to do this for every new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
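<p>The whole publish-and-track flow above can be captured in one small function (a sketch using the same git commands shown in this post; the --set-upstream syntax matches the git version of the era):</p>

```shell
# publish_to_remote URL - add the remote as origin, push master to it, and
# point the local master branch at origin/master in one go.
publish_to_remote() {
  git remote add origin "$1" &&
  git push origin master &&
  git branch --set-upstream master origin/master
}
```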
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.2.0 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
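<p>The approach can be sketched as a function like this (my own take, so the gist version may differ in details; note it takes a SQL LIKE pattern such as &#39;mytables_%&#39; rather than a shell glob, and it prompts for the password twice):</p>

```shell
# mysqldump_pattern USER DB PATTERN - dump only the tables in DB whose
# names match a SQL LIKE pattern, by asking INFORMATION_SCHEMA for the
# table list and handing it to mysqldump.
mysqldump_pattern() {
  user=$1; db=$2; pattern=$3
  tables=$(mysql -u"$user" -p -N -e "SELECT table_name FROM information_schema.TABLES WHERE table_schema='$db' AND table_name LIKE '$pattern';")
  mysqldump -u"$user" -p "$db" $tables
}
```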
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until those updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it run for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them due respect, as much of the learning-Ruby literature either says to ignore them for now or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining variables in Ruby, this looks like yet another variable construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is ultimately a sequence of bytes stored at contiguous memory addresses. In Ruby, a string is always an object, and that takes a fair chunk of memory to represent. If you have a bunch of hash tables, using full objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one: it has few methods (the main ones return its string and integer values), it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages) two Strings, even if they consist of the same sequence of characters, are different objects. Two Symbols made of the same sequence of characters are the <em>same</em> object. In large applications this can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. In a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
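<p>The alias works because bash appends the file name after the expanded text, while the quoted &quot;$@&quot; expands to nothing at an interactive prompt. A shell function is a more robust spelling of the same idea - a suggested variant, not from the original post:</p>

```shell
# Preview one or more files with Quick Look, discarding qlmanage's console noise.
ql() {
    qlmanage -p "$@" >/dev/null 2>&1
}
```

<p>Usage: <code>ql photo.jpg notes.pdf</code></p>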
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge, it is after all uncompressed text, at 1 byte per character if the output is ANSI or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, it should be avoided: compression slows the dump down, and (by default, with MyISAM) tables are locked for the duration, denying other clients access to them. InnoDB implements row-level locking, which is slightly less disruptive, but long-running dumps should still be avoided as much as possible.</p>&#13;
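<p>The safer pattern is the two-step version of the same backup: dump first, compress afterwards, so locks are only held while mysqldump itself runs. An illustrative helper (not from the post; names are examples, credentials assumed to come from ~/.my.cnf):</p>&#13;

```shell
# Dump to a plain .sql file first, then compress it in a separate step.
mysqldump_then_compress() {
    db=$1
    mysqldump "$db" > "$db.sql"   # locks are released as soon as this finishes
    bzip2 -f "$db.sql"            # compression no longer holds the database up
}
```
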
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other clients: writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of those tables can be lost as writes occur during the backup process; that risk is weighed against the cost of blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This carries unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
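<p>Putting the options above together, a full dump might look like this. A sketch with placeholder names; authentication is assumed to be handled elsewhere (e.g. ~/.my.cnf or extra options of your own):</p>&#13;

```shell
# Dump with the locking and import-performance options discussed above,
# then compress in a separate step so the database isn't held up.
backup_db() {
    mysqldump --single-transaction --skip-lock-tables \
              --disable-keys --no-autocommit \
              "$1" > "$1.sql" \
    && gzip -f "$1.sql"
}
```

<p>Running <code>backup_db mydatabase</code> leaves a mydatabase.sql.gz that can be restored with gunzip -c as shown earlier.</p>&#13;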
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to svn del.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
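<p>End to end, the remote set-up above can be sketched like this. Paths and identity are placeholders, with a local filesystem path standing in for the ssh URL:</p>&#13;

```shell
# Create a bare central repository, wire a working repository to it, and push.
central=$(mktemp -d)/myrepo.git
work=$(mktemp -d)/myrepo

git init --bare "$central"
git init "$work"
cd "$work"
git config user.email you@example.com   # commits need an identity
git config user.name 'A Developer'
echo 'hello' > README
git add README
git commit -m 'Initial commit'
git remote add origin "$central"
git push origin HEAD                    # HEAD pushes whichever branch is checked out
```

<p>git push origin master does the same thing when your current branch is named master.</p>&#13;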
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great - that is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will only sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C, BASH the first element of ARGV is the program's name. In Ruby, and in PERL, it is the first argument passed into the program. </p>&#13;
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<pre>aaron ~/Development/ruby/testapp $ rake db:create&#13;
(in /Users/aaron/Development/ruby/testapp)&#13;
Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</pre>&#13;
<p>The mysql driver gets confused because when you install the MySQL RubyGem as directed by Rake, it links against the bundled OS X MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
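<p>Putting the pieces together, an update then becomes a delete followed by a re-add, keyed on the Keyword field. The following is only a sketch against the ZF1 API described above; $index, $newTitle and the 'uri' field name are placeholders:</p>&#13;

```php
// Locate the stale document via its untokenized Keyword field
$term = new Zend_Search_Lucene_Index_Term('http://a.com/uri', 'uri');
foreach ($index->termDocs($term) as $id) {
    $index->delete($id);               // queue the old copy for deletion
}

// Re-add the replacement under the same Keyword identifier
$doc = new Zend_Search_Lucene_Document();
$doc->addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));
$doc->addField(Zend_Search_Lucene_Field::Text('title', $newTitle));
$index->addDocument($doc);
$index->commit();                      // make the delete + add visible
```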
<p>This caught me out until I dug around the source to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one reason it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. The online documentation makes scant mention of what values this $filter parameter can take. Luckily, in the comments, Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
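<p>Since the $filter values are bit flags, they can also be combined with a bitwise OR; note the result is a union (methods matching any flag), not an intersection. A quick sketch using a made-up class:</p>&#13;

```php
<?php
// A made-up class to reflect over
class Demo
{
    public function visible() {}
    protected static function helper() {}
    protected function hidden() {}
}

$r = new ReflectionClass('Demo');
// Bit flags OR together; the filter is a union, so this yields every
// method that is public OR static -- visible() and helper(), not hidden()
foreach ($r->getMethods(ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_STATIC) as $m) {
    echo $m->getName(), "\n";
}
```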
<p>As on many platforms, in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
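<p>Before running sed and iconv against the real dump, the two steps can be rehearsed on a scratch file (the filenames here are invented):</p>&#13;

```shell
# Rehearse the conversion on a scratch file before touching the real dump
printf 'CREATE TABLE post (id INT) DEFAULT CHARSET=latin1;\n' > sample.sql
sed -i 's/latin1/utf8/g' sample.sql                  # swap the declared charset
iconv -f latin1 -t utf-8 sample.sql > sample-utf8.sql
grep 'CHARSET' sample-utf8.sql                       # CHARSET=utf8
```

<p>Note that BSD/OS X sed wants <code>sed -i ''</code> rather than bare <code>-i</code>.</p>&#13;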
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a translated VBulletin language pack, convert its XML to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to help you achieve your goals. The purpose of a CV is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
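<p>The reassignment really is that easy. Running this plain-Ruby sketch prints a warning to stderr and then carries on regardless:</p>&#13;

```ruby
# constant_conundrum.rb -- Ruby constants are merely capitalised
# variables with a guilty conscience
ANSWER = 42
ANSWER = 43   # emits "warning: already initialized constant ANSWER"
puts ANSWER   # prints 43; the "constant" has changed
```

<p>(You can freeze the object a constant points at, but freeze protects the object, not the binding itself.)</p>&#13;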
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience of and understanding of how these low level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit though of learning a bit of C or C++, is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in. But similarly, a sometimes useful characteristic that makes the environment still relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string buffer of 20 bytes (19 characters plus the terminating NUL), and str points to the address in memory where those 20 bytes are reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
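<p>A rough picture of what each '.' costs, written out in C (a sketch of the idea, not PHP's actual engine code): allocate a bigger block, copy both operands into it, and free the old storage when you are done with it.</p>

```c
#include <stdlib.h>
#include <string.h>

/* Join two strings into freshly allocated storage -- roughly the work
   hidden behind every PHP '.' operator. The caller frees the result. */
char *str_concat(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    char *out = malloc(la + lb + 1);   /* +1 for the terminating NUL */
    if (out == NULL)
        return NULL;
    memcpy(out, a, la);
    memcpy(out + la, b, lb + 1);       /* copies b's NUL as well */
    return out;
}
```

<p>Do this in a loop and the repeated allocation and copying adds up, which is exactly why heavy string building in PHP benefits from approaches like building an array and imploding it once.</p>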
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this was where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. One way to avoid physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call $fh = fopen('php://memory', 'wb+'); and you can then use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one with the request data Worldpay sent to your callback URL (including the encoded POST data), the other with the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply run:</p>&#13;
<p><code>svn propedit svn:externals .</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel.  This will not work as you might expect on different (namely case sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive filesystem (e.g. most Unix filesystems, or case-sensitive HFS on a Mac), it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
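The cutoff that DATE_SUB computes can be sanity-checked from the shell; a sketch assuming GNU coreutils date (BSD/macOS date uses -v instead):

```shell
# Cutoff relative to the current time (GNU date):
date -d '30 days ago' +%F
# The fixed example from the post, 30 days before 2010-05-20:
date -d '2010-05-20 -30 days' +%F    # prints 2010-04-20
```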
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by editing app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by subbing the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
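As a concrete sketch of this first step, here is one way to create such a template from the shell (the templatedir/productfinder.phtml path and the markup are purely illustrative placeholders):

```shell
# Create a hypothetical template dir and a minimal .phtml file inside the theme
mkdir -p app/design/frontend/default/default/template/templatedir
cat > app/design/frontend/default/default/template/templatedir/productfinder.phtml <<'EOF'
<div class="product-finder">
    <!-- placeholder markup; put your real block content here -->
    <h3>Product Finder</h3>
</div>
EOF
```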
<p>You will now need to configure one or more layout xml files ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to a Magento class, Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your shell environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
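As a concrete sketch, the line to add to ~/.profile looks like this (de_DE.UTF-8 is just the example from above; pick a locale that 'locale -a' actually lists on your machine):

```shell
# ~/.profile — make the shell environment match the terminal's utf-8
export LC_ALL='de_DE.UTF-8'
```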
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts change: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte does not map to a valid character in utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
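The same conversion can be exercised with the iconv command-line tool, which is handy for checking what a given byte should become (a sketch; 0xE9 is 'é' in iso-8859-1):

```shell
# Re-encode a single iso-8859-1 byte (0xE9, 'é') as utf-8 and show the bytes
printf '\351' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1
# prints: c3 a9  (the two-byte utf-8 encoding of 'é')
```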
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output, you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, null, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't work (at least with the BSD sed that ships with OS X), and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to pass an empty backup suffix, like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
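A portable alternative, if the same script has to run under both GNU sed (Linux) and BSD sed (OS X), is to always pass a non-empty backup suffix; a sketch:

```shell
f=$(mktemp)
printf 'hello world\n' > "$f"
# -i.bak edits in place and leaves the original at "$f.bak";
# this attached-suffix form is accepted by both GNU and BSD sed
sed -i.bak 's/hello/goodbye/g' "$f"
cat "$f"    # goodbye world
```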
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br /> http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) part expands to the list of valid indices for FILES: ${#FILES[@]} is the number of elements in the array, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
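If you only need the values and not the indices, bash can also expand the array directly, which sidesteps the seq arithmetic entirely (and quoting the expansion keeps paths with spaces intact):

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )
# "${FILES[@]}" expands to one word per element
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```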
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one-liner: for file in *; do convert -scale 1024x768 "$file" "resized_$file"; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date --date 'last month' '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
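<p>To see what the alias does end to end, here is a throwaway-repo sketch (all repository and branch names are illustrative) that pushes a branch without -u and then runs the same command the alias expands to:</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# An "origin" repository with a single commit.
git init -q origin-repo
git -C origin-repo -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m 'initial commit'

# Clone it, branch, and push WITHOUT -u: no tracking is set up.
git clone -q origin-repo work && cd work
git checkout -q -b feature
git push -q origin feature

# What "git sup" expands to for the current branch:
git branch --set-upstream-to=origin/"$(git symbolic-ref --short HEAD)"

# Confirm the upstream is now set:
git rev-parse --abbrev-ref '@{u}'   # prints origin/feature
```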
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables cannot be passed along the pipeline, as each stage runs in a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
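<p>A minimal sketch of the pitfall in bash: the while loop below runs in a subshell, so its increments never reach the parent shell.</p>

```shell
#!/usr/bin/env bash
count=0
# Each stage of the pipeline runs in its own subshell, so the
# increments below happen in a throwaway copy of the environment...
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))
done
# ...and the parent shell's variable is untouched.
echo "count=$count"   # prints count=0, not count=3
```

<p>One of the workarounds described at the link above is to keep the loop in the current shell with process substitution: <em>while read -r line; do …; done &lt; &lt;(printf …)</em>.</p>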
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
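<p>To see the setting in action, here is a scratch-repo sketch (branch names are illustrative): once the two branches have diverged, merge.ff only makes git refuse the merge until you rebase.</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email me@example.com
git config user.name me
git config merge.ff only

git commit -q --allow-empty -m 'base'
git checkout -q -b feature
git commit -q --allow-empty -m 'feature work'
git checkout -q -               # back to the original branch
git commit -q --allow-empty -m 'diverging work'

# The branches have diverged, so no fast-forward is possible and
# merge.ff=only makes the merge abort:
git merge feature || echo 'merge refused: rebase first'
```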
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. You find peppered throughout the introduction to PHPUnit subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: the Statist TDD and Mockist/London School TDD styles. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from consideration by MIN, which has the effect of forcing the returned minimum to be greater than zero.</p>
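<p>You can watch NULLIF do its work in a quick shell session. This sketch uses SQLite purely because it runs in one line with no server; NULLIF and MIN behave the same way here as in MySQL, and the table and its values are made up:</p>

```shell
sqlite3 :memory: "
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 7.0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products
  GROUP BY group_id;"
# prints:
# 1|4.5
# 2|7.0
```

<p>Without NULLIF the first group would report 0.0 as its minimum; mapping the zero prices to NULL makes MIN skip them.</p>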
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one place where OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled, to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>Last thing that needs to be done is setting up a self-signed ssl certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the socket file for the MySQL version you&#39;re using to PHP&#39;s mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
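<p>For completeness: the legacy mysql and mysqli extensions read their own analogous directives, so if you use those alongside PDO the same file can carry all three (a sketch, reusing the mysql55 socket path from above):</p>

```ini
; each MySQL extension has its own default-socket setting
pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock
mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock
mysqli.default_socket=/opt/local/var/run/mysql55/mysqld.sock
```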
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track them down.</p>
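<p>Since the option follows the format of .ssh/config, you can go one step further and keep it there permanently, after which a plain sshfs invocation picks it up (the Host alias below is my own invention):</p>

```
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
```

<p>With that in place, sshfs awshost:/var/www/ ~/Sites/awshost needs no extra options.</p>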
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
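<p>For what it&#39;s worth, the shell faces the same first-versus-last split for this job; in POSIX parameter expansion the two behaviours are at least visibly related, spelled with one or two #s (a quick sketch):</p>

```shell
url='http://www.google.com/a/b/c/d.img'
last=${url##*/}    # strip the longest prefix ending in /: like strrchr, minus the slash
first=${url#*/}    # strip the shortest prefix ending in /: everything past the first /
echo "$last"
echo "$first"
```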
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of these two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those (whether in his lab, the other labs, or management) who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs.</p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close, and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resigned and either followed him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or joined one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t live with broken windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx server command directly with the -s argument, instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
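<p>Under the hood the -s flags are nothing exotic: -s stop delivers SIGTERM and -s quit delivers SIGQUIT to the master process recorded in nginx&#39;s pid file. A toy sketch of the mechanics with a stand-in master process (it uses SIGTERM purely because plain sh ignores SIGQUIT in background jobs; all paths and messages are made up for illustration):</p>

```shell
# A stand-in "master process": it records its pid, waits, and traps a
# signal so it can finish up before exiting. nginx does the same dance
# with the pid stored in its real pid file.
pidfile=$(mktemp); log=$(mktemp)
( trap "echo finishing-up > $log; exit 0" TERM
  for i in 1 2 3 4 5; do sleep 1; done ) &
echo $! > "$pidfile"
sleep 1                           # let the trap get installed
kill -TERM "$(cat "$pidfile")"    # deliver the shutdown signal via the pid file
wait
cat "$log"
```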
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. Back in November, though, I did resolve to read more technical books, and in particular to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marveling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite formalising most of the vocabulary for OO software development during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
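<p>A self-contained taste of it, with a couple of fake passwd-style lines so it runs anywhere:</p>

```shell
# -s sets the input delimiter, -t aligns the fields into columns
out=$(printf 'root:0:superuser\naaron:501:me\n' | column -s ':' -t)
echo "$out"
```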
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and then entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
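<p>Neither trick is on by default in a bare zsh; a few lines of .zshrc provide them (a sketch of the standard options - oh-my-zsh ships equivalents, and the numbered aliases are just a common convention):</p>

```shell
# ~/.zshrc
setopt AUTO_PUSHD PUSHD_IGNORE_DUPS   # every cd pushes onto the directory stack
setopt AUTO_CD                        # a bare directory name means cd into it
alias d='dirs -v'                     # list the stack with indexes
for i in {1..9}; do alias $i="cd +$i"; done   # typing 1..9 jumps to that stack entry
```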
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections made to its own localhost port 9000 back to the ssh client&#39;s port 9000. So when xdebug on the VM connects to localhost:9000, it ends up actually talking to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available; in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host packages themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with rather than a pipe (strictly, it can read stdin if you pass &#39;-&#39; as the filename, but the following trick is more general). We can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
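<p>If process substitution is new to you, you can see what&#39;s going on with any command that insists on a file argument: bash replaces the <code>&lt;(...)</code> expression with a /dev/fd path that the command reads like an ordinary file. A trivial stand-in for the magerun case:</p>

```shell
# bash substitutes <(...) with something like /dev/fd/63;
# wc reads it as if it were a regular file and reports 3 lines
wc -l <(printf 'one\ntwo\nthree\n')
```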
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
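<p>If you want to try the first form without touching a real project, here&#39;s a throwaway repository to play in (all names below are invented for the demo):</p>

```shell
# build a scratch repo with one commit touching one file
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
echo 'hello' > readme.txt
git add readme.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'add readme'
# prints the commit header plus just the affected file names
git show --name-only HEAD
```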
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important, as by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are many reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there than on pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, so I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, it needs to realise this is less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh then I need to prune my branches list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
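<p>You can reproduce the whole stale-branch scenario locally with a bare repo and two clones standing in for the two hosts (all paths and branch names here are invented for the demo):</p>

```shell
cd "$(mktemp -d)" && base="$PWD"
git init -q --bare origin.git

# host one: create a branch and push it
git clone -q "$base/origin.git" one
cd "$base/one"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD
git branch feature && git push -q origin feature

# "another host": delete the branch on the remote
git clone -q "$base/origin.git" two
cd "$base/two" && git push -q origin :feature

# back on host one, origin/feature is now stale...
cd "$base/one"
git branch -r            # still lists origin/feature
git remote prune origin  # ...until we prune
git branch -r            # origin/feature is gone
```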
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
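<p>For reference, the override takes roughly this shape in local.xml. The event and observer node names below are recalled from Mage_Log&#39;s config.xml, so treat this as a sketch and verify them against your own Magento version; repeat the block for each logging event you want to silence:</p>

```xml
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <log>
                        <!-- 'disabled' is not a real observer type,
                             so this observer can never fire -->
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```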
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS safe, fixing one spot but (and programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and adds a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
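<p>The gist isn&#39;t reproduced here, but the general shape of such an adminhtml.xml override is roughly this (the catalog menu node and the module name are illustrative assumptions on my part, not the gist&#39;s actual contents):</p>
<pre><code>&lt;?xml version=&quot;1.0&quot;?&gt;
&lt;config&gt;
    &lt;menu&gt;
        &lt;!-- the menu entry you want to hide, e.g. the Catalog menu --&gt;
        &lt;catalog&gt;
            &lt;!-- depend on a module that does not exist, so the item never renders --&gt;
            &lt;depends&gt;&lt;module&gt;Mage_Nonexistent&lt;/module&gt;&lt;/depends&gt;
        &lt;/catalog&gt;
    &lt;/menu&gt;
&lt;/config&gt;
</code></pre>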
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all regular files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
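<p>Putting the two steps together, here is a quick sketch you can run anywhere. The file names and dates are made up for illustration:</p>

```shell
# Work in a scratch directory so nothing real is touched
cd "$(mktemp -d)"

# Boundary files: range is 2020-01-01 00:00 to 2020-12-31 23:59
touch -t 202001010000 start_date_file
touch -t 202012312359 end_date_file

# Files falling before, inside and after the range
touch -t 201906150930 before.log
touch -t 202006150930 inside.log
touch -t 202106150930 after.log

# List regular files newer than the start boundary and not newer than the
# end boundary, excluding the boundary files themselves
matches=$(find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name start_date_file ! -name end_date_file)
echo "$matches"
```

<p>Only inside.log is printed: before.log fails the -newer test and after.log fails the ! -newer test.</p>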
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much like a plain ; does in regular bash; the backslash stops the shell from swallowing it).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning. The first: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher rated power supply can support a lower rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2 whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
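<p>In concrete terms it looks like the sketch below. The paths assume a default Macports prefix, and the sketch works against a scratch copy so it is safe to run anywhere; on a real system you would sudo-append to /etc/shells itself:</p>

```shell
# Scratch stand-in for /etc/shells
shells_file=$(mktemp)
printf '/bin/bash\n/bin/zsh\n' > "$shells_file"

# Register the Macports bash as a permitted login shell
# (for real: echo '/opt/local/bin/bash' | sudo tee -a /etc/shells)
echo '/opt/local/bin/bash' >> "$shells_file"

# chsh consults this list; once the entry is present it accepts the change
grep -cx '/opt/local/bin/bash' "$shells_file"
```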
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Because the awk program itself is wrapped in single quotes for the shell, you can&#39;t embed one directly, so you pass a single quote in via the q variable. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other string concatenation approach would do just fine here too.</p>
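<p>You can try the pipeline without a database by substituting printf for the mysql step. The column values here are made up:</p>

```shell
# Three fake rows standing in for mysql --silent output
result=$(printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/\(.*\)/[\1];/')
echo "$result"
```

<p>This prints [&#39;red&#39;,&#39;green&#39;,&#39;blue&#39;]; ready to paste into a script.</p>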
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. It means the validator also has to be constructed anew each time, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. By break, in the absolute best case, I mean they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want to have functional network name resolution, so we copy the host&#39;s /etc/resolv.conf over to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. This is typically used on grouped products when determining whether all of their children&#39;s stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first and the product id column was second (unlike the if branch, where they were in the reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the resulting array collapsed to just 2 entries (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to be first in the result set so that it gets used as the key.</p>
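<p>The collapse is easy to reproduce outside PHP. Here is a minimal sketch of the same pitfall using awk associative arrays in place of fetchPairs; the status codes and product ids are made up:</p>

```shell
# Rows as (status, product_id) pairs - the broken column order
rows='1 101
1 102
2 103
1 104'

# Keyed on column 1 (status): four products collapse into 2 distinct keys
bad=$(echo "$rows" | awk '{ if (!($1 in seen)) n++; seen[$1] = $2 } END { print n }')

# Keyed on column 2 (product_id): one entry per product, 4 distinct keys
good=$(echo "$rows" | awk '{ if (!($2 in seen)) n++; seen[$2] = $1 } END { print n }')

echo "$bad vs $good"
```

<p>Whatever builds the key-value map, putting the non-unique column first silently throws rows away.</p>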
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD) drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised throughout the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders, techniques for creating test data for use in your test cases, particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and support what is being discussed. I felt the introductory sections on TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display in the depth and quality of their references. While GOOS is a fairly domain-specific (Mock Object) text, it serves as a wonderful launching pad for further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly, there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell in other ways, such as with the <em>su -</em> command or the explicit login-shell option some desktop environments provide. In these cases the same rule applies: a login shell means .bash_profile is sourced, and .bashrc runs only if .bash_profile sources it (as many distributions&#39; default dotfiles do).</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to have .bash_profile do nothing more than source .bashrc, and put everything in .bashrc.</p>
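<p>A minimal sketch of that delegation pattern, with a small guard function (the <code>source_if_exists</code> name is my own, and the demo runs against a temporary file rather than your real dotfiles):</p>

```shell
# Delegation idiom: source a file only if it exists, as a .bash_profile
# typically does with ~/.bashrc. Demonstrated on a temp file for safety.
source_if_exists() {
    [ -f "$1" ] && . "$1"
}

tmprc=$(mktemp)
echo 'GREETING=hello' > "$tmprc"
source_if_exists "$tmprc"
echo "$GREETING"   # prints: hello
rm -f "$tmprc"
```

<p>In a real .bash_profile this collapses to a single line: <code>[ -f ~/.bashrc ] &amp;&amp; . ~/.bashrc</code>.</p>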
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
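<p>To see what the middle of that pipeline does, you can run the sed stage over a sample selections line (the package name here is made up; real dpkg output is tab-separated):</p>

```shell
# dpkg --get-selections lists removed-but-unpurged packages with the
# state "deinstall"; sed rewrites that state to "purge" so that
# dpkg --set-selections marks them for purging.
echo "somepackage deinstall" | sed 's/deinstall/purge/'   # prints: somepackage purge
```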
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libXss and assorted Qt libraries.</p>
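<p>Incidentally, with a dependency list this long it can be easier to filter the ldd output down to just the failures. A quick sketch (using /bin/ls as a stand-in binary):</p>

```shell
# Show only the unresolved shared libraries for a binary; print a note
# if everything resolves. Swap /bin/ls for /usr/bin/skype as needed.
ldd /bin/ls | grep 'not found' || echo 'all dependencies satisfied'
```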
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into, and then ripped out of, the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t already doing this. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is, I think, <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, opens an X window showing the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: its size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; origfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
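<p>As an aside, the sed rename in that loop can also be done with bash parameter expansion, avoiding two extra processes per file. A minimal sketch (the filename is invented):</p>

```shell
# The same rename sed performs above, using only bash parameter expansion
IMAGE='holiday-snap.jpg'            # hypothetical filename
RESIZED="${IMAGE%.jpg}-resized.jpg" # strip trailing .jpg, append the new suffix
echo "$RESIZED"                     # holiday-snap-resized.jpg
```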
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In these cases, convert would actually resize the image to something like 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes the syntax easier to remember.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
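<p>Newer versions of git (1.7.0 and later) also offer an explicit --delete flag that does the same job without the colon trick. A self-contained sketch you can run in a scratch directory (repository names are invented for the demo):</p>

```shell
set -e
SCRATCH=$(mktemp -d)
git init -q --bare "$SCRATCH/atestrepo.git"            # stand-in remote
git clone -q "$SCRATCH/atestrepo.git" "$SCRATCH/work"
cd "$SCRATCH/work"
git -c user.name=demo -c user.email=demo@localhost commit -q --allow-empty -m 'initial'
git push -q origin HEAD                                # publish the default branch
git branch develop
git push -q origin develop                             # create the remote develop branch
git push -q origin --delete develop                    # same effect as: git push origin :develop
```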
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until the end of input (EOF) before opening and writing to the output file. I.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
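<p>The cp1252/iso-8859-1 distinction is easy to demonstrate: bytes 0x80-0x9F are printable characters in cp1252 (curly quotes, the euro sign and so on) but control codes in iso-8859-1. For example:</p>

```shell
# 0x93 is a left curly quote in cp1252, but an unassigned control code in
# iso-8859-1; iconv converts it to the three-byte UTF-8 sequence e2 80 9c
printf '\x93' | iconv -f cp1252 -t utf-8 | od -An -tx1
```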
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can use bash, iconv and sponge to save us the tedium of manually converting each file to a new copy and then replacing the original with that copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
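<p>If Moreutils isn&#39;t available, the soak-then-write behaviour can be approximated with a temporary file. A minimal sketch (the function name soak is made up, and unlike the real sponge it does not preserve the target file&#39;s ownership or permissions):</p>

```shell
# soak: a stand-in for sponge (the name is made up)
soak() {
  local TMP
  TMP=$(mktemp) || return 1
  cat > "$TMP"     # absorb everything from stdin until EOF
  mv "$TMP" "$1"   # only then replace the target file
}

# Mirroring the iconv example on a single file
printf 'caf\xe9\n' > demo.txt                     # 0xe9 is e-acute in cp1252
iconv -f cp1252 -t utf-8 demo.txt | soak demo.txt
```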
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-hosted tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
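<p>You can exercise the same plumbing without a network by letting cat stand in for wget. A self-contained sketch (filenames invented for the demo):</p>

```shell
# Build a small tarball to play the part of the remote download
mkdir -p atarfile
echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile

# Same shape as the one-liner, with cat in place of wget
tar zxv < <(cat atarfile.tar.gz)
```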
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (they were blocked because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
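<p>Command substitution itself is plain shell: the inner command runs first and its stdout is spliced into the outer command line as arguments. A drush-free sketch, with printf standing in for the pm-list call:</p>

```shell
# printf plays the role of: drush pm-list --no-core --type=module --pipe
MODULES=`printf 'ad ad_channel click_filter'`
# Unquoted, $MODULES word-splits into separate arguments, one per module
set -- $MODULES
echo "would disable $# modules: $*"
```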
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
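<p>On newer versions of git (1.7.0 and later), the push and --set-upstream steps can be combined using push&#39;s -u flag. A self-contained sketch you can run in a scratch directory (names invented for the demo):</p>

```shell
set -e
SCRATCH=$(mktemp -d)
git init -q --bare "$SCRATCH/origin.git"
git clone -q "$SCRATCH/origin.git" "$SCRATCH/work"
cd "$SCRATCH/work"
git -c user.name=demo -c user.email=demo@localhost commit -q --allow-empty -m 'Initial feature commit'
git checkout -q -b my-new-feature
# -u (--set-upstream) pushes the branch and records origin/my-new-feature
# as its upstream in one step
git push -q -u origin my-new-feature
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'
```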
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
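<p>You can confirm the change took effect with the id command (shown here for the current user; substitute the username as appropriate):</p>

```shell
# -n prints names instead of numeric IDs, -G lists all groups
# (primary plus supplementary) for the given user
id -nG "$(whoami)"
```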
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
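<p>A handy variant is git stash pop, which applies the stash and drops the entry in one step. A self-contained sketch of the same rescue (file and branch names invented):</p>

```shell
set -e
SCRATCH=$(mktemp -d)
cd "$SCRATCH"
git init -q
git config user.name demo
git config user.email demo@localhost
echo 'base' > notes.txt
git add notes.txt
git commit -q -m 'initial'
git branch develop
echo 'accidental edit' >> notes.txt   # oops: edited on the wrong branch
git stash                             # park the change
git checkout -q develop
git stash pop                         # re-apply it and drop the stash entry
```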
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
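<p>git checkout also accepts a --track shorthand that derives the local branch name from the remote one. A self-contained sketch (repository layout invented for the demo):</p>

```shell
set -e
SCRATCH=$(mktemp -d)
git init -q --bare "$SCRATCH/origin.git"
git clone -q "$SCRATCH/origin.git" "$SCRATCH/seed"
cd "$SCRATCH/seed"
git -c user.name=demo -c user.email=demo@localhost commit -q --allow-empty -m 'initial'
git push -q origin HEAD
git push -q origin HEAD:develop       # publish a develop branch on the remote
git clone -q "$SCRATCH/origin.git" "$SCRATCH/work"
cd "$SCRATCH/work"
# --track derives the local branch name (develop) from origin/develop
git checkout -q --track origin/develop
```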
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
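<p>The same identity can be set with git itself rather than editing the file by hand (values as in the snippet above; run these as the jenkins user):</p>

```shell
# Writes the [user] block into the ~/.gitconfig of whoever runs it
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
```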
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application, but you&#39;re free to use whatever codebase you like. For the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First we&#39;ll need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework Test Cases, we have our ant build file setup and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly, you will have a build directory containing a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can browse these directly, but it&#39;s much more convenient to view them through Jenkins.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new project, copy this template and give it a name. Choose whatever you like; for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to complete a first build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that, and Jenkins will check out a copy of our project from git and then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete, you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload the server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables, I made use of MySQL&#39;s INFORMATION_SCHEMA feature, and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent invocations of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best with the least amount of work.</p>
<p>To avoid having to do this for every new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store, as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a pattern, putting them in an array, and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
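<p>As a rough idea of the shape such a function takes, here is a hedged sketch of my own (the gist embedded below is the canonical version); it assumes mysql and mysqldump can read your credentials from ~/.my.cnf or the environment:</p>

```shell
# Sketch: dump only the tables in a database whose names match a SQL
# LIKE pattern, by asking information_schema for the matching names first.
mysqldump_bypattern() {
    local db="$1" pattern="$2" tables
    tables=$(mysql -N -B -e "SELECT table_name FROM information_schema.tables WHERE table_schema = '${db}' AND table_name LIKE '${pattern}'")
    # $tables is intentionally unquoted so each name becomes its own argument
    mysqldump "${db}" ${tables}
}

# Usage: mysqldump_bypattern mydb 'mytables_%' > mytables.sql
```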
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now, if we try this again using the Strings from the keys array instead, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install it.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select the PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect as much of the learning Ruby literature says to basically ignore them for now, or launch into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
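<p>In practice you meet Symbols most often as hash keys and option names. A tiny illustrative example (the names here are made up):</p>
<pre><code>person = { :name =&gt; 'Aaron', :role =&gt; 'developer' }
person[:name]   # =&gt; "Aaron"
</code></pre>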
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using string objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. Two Symbols made of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
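<p>You can see this identity property for yourself in irb (the symbol name here is arbitrary):</p>
<pre><code>:fox.object_id == :fox.object_id    # =&gt; true, every :fox is the same object
"fox".object_id == "fox".object_id  # =&gt; false, each literal is a new String
</code></pre>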
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. In a high-level language, the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. The two best resources I&#39;ve found explaining them are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export and import copies of MySQL databases, whether for testing and debugging or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect, and for small databases it is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character for a single-byte encoding such as Latin-1, or up to four bytes per character for UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems an efficient way to do your backup, this should be avoided: (by default with MyISAM) you’re locking tables for the duration of the dump and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but it should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of compression format is important. You have to trade off decompression speed against file size. The extra CPU time spent decompressing a bzip2 dump may outweigh the few extra megabytes saved over the faster gzip. Whatever your choice, importing a compressed dump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a START TRANSACTION statement before dumping the contents of a table, ensuring a consistent view of the data without blocking other clients; writes can occur while the backup is taking place without affecting it. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean consistency between MyISAM tables can be lost as writes occur during the backup process; that risk must be weighed against the cost of blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of compression format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports, as MySQL will only build the indexes once the import completes. With keys enabled, the indexes are updated after each row is inserted, which is suboptimal for a batch import.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once each table has been fully imported; --no-autocommit wraps each table’s inserts in a single transaction.</p>&#13;
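<p>Putting these options together, a full optimised dump might look like the following sketch (database name and credentials are placeholders). With the locking concerns mitigated, piping straight through gzip also becomes less of a worry.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables \&#13;
    --disable-keys --no-autocommit \&#13;
    -uuser -p mydatabase | gzip -c &gt; mydump.sql.gz&#13;
</code></pre>&#13;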
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute command <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression matches greedily, expanding up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn delete</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will stage and commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing one or more paths will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is the canonical naming for a bare git repository (i.e. one that has only the meta information and not a working copy) and it usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar Subversion model of svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&amp;#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; its shirtiness generally manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and found the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes a word has a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed to the program.</p>&#13;
<p>I'm trying to think which makes more sense; probably the Ruby/Perl implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the Install New Software screen. On a Mac it's found under the Help menu by selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own Macports mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation, as there is no in situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param  string $controller_class&#13;
 * @return ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    // Filter with ReflectionMethod::IS_PUBLIC, not ReflectionProperty::IS_PUBLIC&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internalisation support is pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retro fit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to install, convert its XML export to UTF-8 in the same way.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just variables declared with a capitalised first letter. Oh sorry, that should be capitalized; Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
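<p>A minimal sketch of this behaviour (the constant name is made up for illustration):</p>&#13;

```ruby
# Ruby lets you reassign a "constant"; normally it only prints a warning.
ANSWER = 42

# Suppress the redefinition warning for this demo, then reassign anyway.
original_verbose, $VERBOSE = $VERBOSE, nil
ANSWER = 43
$VERBOSE = original_verbose

puts ANSWER  # prints 43 - the constant changed without complaint
```

<p>The only pushback Ruby gives you is a warning on stderr, and as shown above even that is easily silenced.</p>&#13;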
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back-to-front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and of course a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is getting an “appreciation” of memory management. C is rare among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares storage for a string of 20 chars (19 usable characters plus the terminating NUL), and str itself refers to an address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. These were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
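<p>A quick way to see the two orders side by side is Python's struct module (example mine, not from the paper):</p>&#13;

```python
import struct

# Pack the integer 1 into four bytes, big endian ('>') vs little endian ('<').
big = struct.pack('>I', 1)
little = struct.pack('<I', 1)

print(big)     # b'\x00\x00\x00\x01' - most significant byte first
print(little)  # b'\x01\x00\x00\x00' - least significant byte first
```
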
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
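To make the idea runnable without a MySQL server, here is a sketch using an in-memory SQLite database via PDO, where SQLite's strftime('%Y-%m', ...) stands in for MySQL's DATE_FORMAT; the orders table and its columns are invented for the demo:

```php
<?php
// In-memory SQLite database so the example runs anywhere PDO is available.
$pdo = new PDO('sqlite::memory:');
$pdo->exec("CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT)");
$pdo->exec("INSERT INTO orders (created_at) VALUES
    ('2010-06-01'), ('2010-06-15'), ('2010-07-07')");

// Group by the formatted date, exactly as in the MySQL query above,
// with strftime playing the role of DATE_FORMAT.
$rows = $pdo->query(
    "SELECT strftime('%Y-%m', created_at) AS grouping_date, COUNT(id) AS n
     FROM orders GROUP BY grouping_date ORDER BY grouping_date"
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($rows as $row) {
    echo "{$row['grouping_date']}: {$row['n']}\n";
}
```

Swap the format string for '%Y' or '%Y-%m-%d' to change the granularity, just as with DATE_FORMAT.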
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive a notification email from Worldpay. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL, including the encoded POST data, the other the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
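If you find yourself doing this often, the resubmission can be scripted. A sketch in PHP, using a cut-down, illustrative subset of the POST fields and a hypothetical callback URL:

```php
<?php
// Rebuild the POST body from the fields found in Worldpay's
// failure-notification attachment. The field names below are an
// illustrative subset, not the full callback payload.
function buildCallbackPayload(array $fields): string
{
    // http_build_query handles the URL encoding (spaces become '+').
    return http_build_query($fields);
}

$payload = buildCallbackPayload([
    'msgType'     => 'authResult',
    'transId'     => '1000000000',
    'transStatus' => 'Y',
    'cartId'      => '12345678',
    'name'        => 'Mr Test Tester',
]);

// Resubmit to your callback URL (hypothetical address) via the curl
// extension, mirroring the command-line curl call.
if (function_exists('curl_init')) {
    $ch = curl_init('https://mysite.com/callback');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // $response = curl_exec($ch);  // uncomment to actually send
    curl_close($ch);
}
```

From here it is a short step to driving the same function from a mail reader or a job queue.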
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library in your application without downloading the tarball and using something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals path/to/lib</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
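A simplified sketch of the underscore-to-path mapping makes both failure modes concrete; this illustrates the convention only, not the real Varien autoloader:

```php
<?php
// Simplified sketch of how a Magento-style autoloader turns the class-name
// suffix of a model alias into a file path: split on underscores, uppercase
// the first letter of each part, and join with directory separators.
function aliasSuffixToPath(string $suffix): string
{
    $parts = array_map('ucfirst', explode('_', $suffix));
    return implode('/', $parts) . '.php';
}

echo aliasSuffixToPath('a_long_name_for_a_model'), "\n"; // A/Long/Name/For/A/Model.php
echo aliasSuffixToPath('alongnameforamodel'), "\n";      // Alongnameforamodel.php
```

Neither result matches a file named ALongNameForAModel.php on a case-sensitive file system, which is exactly why the camelcased class name breaks.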
<p>On Windows (case-insensitive) this appears to work; on a case-sensitive file system (e.g. case-sensitive HFS+ on a Mac, or a typical Unix file system) it will not.</p> ]]>
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
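The same cutoff arithmetic can be sanity-checked in plain PHP, pinned to the example date so the result is reproducible:

```php
<?php
// Mirror DATE_SUB(CURDATE(), INTERVAL 30 DAY) in plain PHP, using the
// example date above instead of the current date.
$cutoff = date('Y-m-d', strtotime('2010-05-20 -30 days'));
echo $cutoff, "\n"; // 2010-04-20

// A record is "more than 30 days old" when its date sorts before the cutoff;
// ISO-format dates compare correctly as plain strings.
$isOld = '2010-04-01' < $cutoff;
var_dump($isOld); // bool(true)
```

In the SQL above, the WHERE clause performs exactly this comparison row by row.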
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by subbing the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected, run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where the second (time) parameter of the delete command was removed. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a') you just need to edit /etc/profile, ~/.profile or ~/.bash_profile, and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
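<p>As a quick sanity check, something like the following works in any POSIX shell. Treat en_GB.UTF-8 as a placeholder; locale names vary by system, so pick one from your own 'locale -a' output:</p>

```shell
# List the UTF-8 capable locales this machine knows about (may be empty)
locale -a | grep -i 'utf' || true

# Enable one for the current session only; add the export line to
# ~/.profile (or ~/.bash_profile) to make it permanent
export LC_ALL='en_GB.UTF-8'
echo "$LC_ALL"
```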
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, because that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
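<p>The same conversion is also available from the command line via the iconv utility, which is handy for normalising whole files (the filenames here are illustrative):</p>

```shell
# 0xE9 (octal 351) is the single byte for 'é' in ISO-8859-1; iconv
# re-encodes it as the two-byte UTF-8 sequence 0xC3 0xA9
utf8=$(printf '\351' | iconv -f ISO-8859-1 -t UTF-8)
printf '%s\n' "$utf8"

# For a whole file: iconv -f ISO-8859-1 -t UTF-8 input.txt > output.txt
```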
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, null, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code>. BSD sed (as shipped with Mac OSX) requires a backup-suffix argument to -i; the empty string means no backup is kept.</p>&#13;
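<p>A quick sketch you can run to verify the substitution works on a throwaway file. Note the GNU sed form is shown here, which accepts -i with no argument; on BSD/Mac OSX use -i '' as above:</p>

```shell
# Throwaway demonstration of in-place substitution
tmp=$(mktemp)
printf 'hello world\n' > "$tmp"

# GNU sed form; on BSD/Mac OSX this would be: sed -i '' 's/hello/goodbye/g' "$tmp"
sed -i 's/hello/goodbye/g' "$tmp"

result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```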
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expansion gives the number of elements in FILES, so $(seq 0 $((${#FILES[@]} - 1))) produces the list of valid array indices; the seq command prints a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
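<p>As an aside (an alternative idiom, not part of the original construct): if you don't need the numeric index itself, bash can expand the array elements directly, which sidesteps the seq arithmetic entirely:</p>

```shell
#!/bin/bash
# Bash-specific: iterate array elements directly instead of by index
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

# "${FILES[@]}" expands to one word per element; the quotes keep
# paths containing spaces intact
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```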
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
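<p>If you&#39;re curious what the alias actually runs, here&#39;s a quick sketch in a throwaway repo (the branch name zendesk and the placeholder identity are made up for the example):</p>
<pre><code>set -e
dir=$(mktemp -d) &amp;&amp; cd "$dir"
git init -q .
git config user.email you@example.com # throwaway identity so commits work
git config user.name "You"
git commit -q --allow-empty -m "base"
git checkout -q -b zendesk
branch=$(git symbolic-ref --short HEAD) # resolves to "zendesk"
echo "git branch --set-upstream-to=origin/$branch"
</code></pre>
<p>The echoed line is exactly the command git suggests in the error message above, minus the typing.</p>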
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables cannot be passed along the pipeline, as each subprocess gets its own copy of the environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
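<p>A minimal sketch of the effect, along with one of the workarounds from that FAQ (process substitution, which keeps the loop in the current shell; this needs bash, not plain sh):</p>
<pre><code>count=0
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1)) # runs in a subshell
done
echo "$count" # prints 0, the parent shell never saw the increments

count=0
while read -r line; do
  count=$((count + 1)) # runs in the current shell
done &lt; &lt;(printf 'a\nb\nc\n')
echo "$count" # prints 3
</code></pre>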
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you&#39;re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit&#39;s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a &#39;fast-forward&#39;.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
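<p>Here&#39;s a sketch of the refusal in action, in a throwaway repo (the branch name and identity are made up; the exact error wording varies between git versions):</p>
<pre><code>set -e
dir=$(mktemp -d) &amp;&amp; cd "$dir"
git init -q .
git config user.email you@example.com # throwaway identity so commits work
git config user.name "You"
git config merge.ff only
git commit -q --allow-empty -m "base"
git checkout -q -b feature
git commit -q --allow-empty -m "feature work"
git checkout -q - # back to the original branch
git commit -q --allow-empty -m "divergent work"
git merge feature &amp;&amp; merged=yes || merged=no
echo "$merged" # prints no, the merge was refused
</code></pre>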
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and, at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. You find peppered throughout the introduction to PHPUnit subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing a descriptive message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: Statist TDD and Mockist/London School TDD. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and instead focuses on the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
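<p>To see NULLIF doing its job, here&#39;s a tiny self-contained query (the inline derived table stands in for a real products table):</p>
<pre><code>SELECT MIN(price)            AS min_with_zero,  -- 0.00
       MIN(NULLIF(price, 0)) AS min_above_zero  -- 3.50
FROM (SELECT 0.00 AS price
      UNION ALL SELECT 5.00
      UNION ALL SELECT 3.50) AS prices;
</code></pre>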
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now to make installing PEAR packages easier I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled, to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed ssl certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
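<p>As an aside, if what you actually want is the bare filename without the leading slash, a couple of sketches (remember strrchr includes the matched needle itself):</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo substr(strrchr($url, &#39;/&#39;), 1); // prints d.img
echo basename(parse_url($url, PHP_URL_PATH)); // prints d.img
</code></pre>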
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However, I ended up coming away with a lot more: in particular, a new appreciation for a number of scientists I previously knew very little about, scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did; however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read.</p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationships: with the researchers who shared his vision of interactive computing, and with those, whether in other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. Inside it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs.</p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down, but Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC in particular - their feelings, motivations and backgrounds - that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
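<p>To see why the cast-based guard works, consider how PHP coerces an empty string. This is just an illustrative snippet, not Magento code:</p>

```php
<?php
// Illustrative snippet (not Magento code): PHP coerces an empty string
// to 0.0, while Google's schema rejects '' outright as a decimal value.
$basePrice = '';                  // what the quote item returns for a free product
$unitPrice = ((float) $basePrice > 0) ? $basePrice : 0.00;

echo sprintf('%.2F', $unitPrice); // prints 0.00, a valid decimal
```

<p>Note that %F formats the float without locale-specific separators, which is the behaviour you want when emitting a decimal into XML.</p>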
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
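<p>A rough sketch of that resolution order, with resolveClassFile() as an invented helper for illustration (the real mechanism is the include_path Magento builds from the three codepool directories):</p>

```php
<?php
// Invented helper approximating Magento's codepool lookup order.
// The real mechanism is the include_path built from app/code/local,
// app/code/community and app/code/core, searched in that order.
function resolveClassFile($className, $baseDir)
{
    // Mage_Core_Model_Foo -> Mage/Core/Model/Foo.php
    $relativePath = str_replace('_', '/', $className) . '.php';
    foreach (array('local', 'community', 'core') as $codePool) {
        $candidate = $baseDir . '/app/code/' . $codePool . '/' . $relativePath;
        if (file_exists($candidate)) {
            return $candidate; // first codepool with the file wins
        }
    }
    return null; // not found in any codepool
}
```

<p>Because local is searched first, a copy of Checkout.php placed there shadows the core version without touching the core codepool.</p>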
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat, but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite their formalising most of the vocabulary for OO software development during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking did not survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column command columnates input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story, very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin. This isn&#39;t a bad alternative, but once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="https://github.com">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
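<p>For anyone who hasn&#39;t played with 5.3 yet, here is a tiny sketch of two of those features together. The Acme\Text namespace and makeSuffixer() are made-up names, purely for illustration:</p>

```php
<?php
// A made-up namespace to show off PHP 5.3 namespacing; nothing here
// is a real library.
namespace Acme\Text;

// Closures can capture variables from the enclosing scope with 'use'
function makeSuffixer($suffix)
{
    return function ($word) use ($suffix) {
        return $word . $suffix;
    };
}

$shout = makeSuffixer('!');
echo $shout('Namespaces'); // prints Namespaces!
```

<p>Small as it is, this kind of thing was simply impossible before 5.3 without string-based callbacks and global function names.</p>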
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time, PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) and dubious (at worst) quality, and a community lacking any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host it themselves.</p>
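<p>To make that concrete, here is a minimal, illustrative composer.json (monolog is just an example dependency): a project declares what it needs and a version constraint, and Composer resolves it against Packagist, installs it under vendor/ and generates an autoloader.</p>

```json
{
    "require": {
        "monolog/monolog": "1.2.*"
    }
}
```

<p>Run composer install, add require &#39;vendor/autoload.php&#39;; to your bootstrap, and the library&#39;s classes are available with no manual include juggling.</p>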
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a>, <a href="http://www.doctrine-project.org">Doctrine</a>, are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml, though, is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint wants a file to work with, but with bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature we can feed it magerun&#39;s output without creating temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
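<p>Process substitution is worth having in your toolbox generally: &lt;(cmd) exposes a command&#39;s output under a file-like path (such as /dev/fd/63), so any tool that insists on a filename can consume it. A quick sketch, using commands chosen purely for illustration:</p>

```shell
#!/bin/bash
# wc -l demands a file argument; feed it a command's output instead.
wc -l <(printf 'one\ntwo\nthree\n')

# The same trick lets diff compare two command outputs, no temp files.
diff <(sort <<< $'b\na') <(printf 'a\nb\n') && echo 'identical'
```

<p>Note this is a bash (and zsh/ksh) feature; plain POSIX sh doesn&#39;t support it.</p>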
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
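<p>If you want to try this without touching a real project, the following throwaway-repo sketch shows the shape of the output (the identity flags are only there so the commit succeeds on a machine with no git config):</p>

```shell
#!/bin/sh
# Build a scratch repo with one commit touching two files...
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo one > a.txt
echo two > b.txt
git add .
git -c user.email=you@example.com -c user.name=you commit -qm 'add files'
# ...then ask for the affected paths only, suppressing the commit header.
git show --name-only --pretty=format: HEAD
```

<p>The last command lists just a.txt and b.txt, with no diff body.</p>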
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer), I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than on pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blog posts like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This post has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general-purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
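<p>The bottom-up lookup is easy to demonstrate without Rails at all. This plain-Ruby sketch is a toy re-creation of the documented search order, not the real ActiveSupport code:</p>

```ruby
# Handlers as registered in a controller, top to bottom. rescue_from scans
# them bottom-up and fires the first handler whose class passes is_a?.
HANDLERS = [
  [Exception,     ->(e) { "generic handler: #{e.class}" }],     # safe at the TOP
  [ArgumentError, ->(e) { "specific argument-error handler" }], # registered last
]

def handle(error)
  # Bottom-up scan, mirroring what the Rails API docs describe.
  _klass, fn = HANDLERS.reverse.find { |klass, _| error.is_a?(klass) }
  fn.call(error)
end

puts handle(ArgumentError.new) # the specific handler wins
puts handle(RuntimeError.new)  # nothing more specific matched, so Exception catches it
```

<p>Swap the two rows and Exception, now at the bottom, shadows everything, which is exactly the trap described above.</p>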
<p>What can we learn from this? Well, for one thing, Rails programmers live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems: the answers they find are often wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see which branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository. Many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
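<p>You can reproduce the whole stale-branch cycle locally with two throwaway repos (paths and identity flags here are illustrative):</p>

```shell
#!/bin/sh
tmp=$(mktemp -d)

# A stand-in "remote" with one extra branch...
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m 'initial commit'
git -C "$tmp/upstream" branch doomed

# ...cloned, and then the branch is deleted upstream.
git clone -q "$tmp/upstream" "$tmp/clone"
git -C "$tmp/upstream" branch -D doomed

git -C "$tmp/clone" branch -r           # still lists origin/doomed
git -C "$tmp/clone" remote prune origin # reports [pruned] origin/doomed
git -C "$tmp/clone" branch -r           # origin/doomed is gone
```

<p>git remote prune also accepts -n (--dry-run) if you want to preview what would be removed before committing to it.</p>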
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, you can see that if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe; they fixed one instance but (and programmers are human) missed the other, which is exactly the same line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above, which means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here, the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and &#39;;&#39; terminates the command sequence, much like it does in regular bash (it is escaped as \; so the shell passes it through to find).</p>
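<p>Here is a self-contained version you can run safely in a scratch directory. It fakes three log files with touch -t and selects only the one falling between the two boundaries; the file names and dates are invented for the demo:</p>

```shell
#!/bin/sh
tmp=$(mktemp -d) && cd "$tmp"
mkdir logs

# Three files with distinct, faked modification times.
touch -t 202001010000 logs/old.log
touch -t 202006010000 logs/mid.log
touch -t 202012010000 logs/new.log

# Boundary files kept OUTSIDE the searched directory: ! -newer end is
# true for the end file itself, so it would otherwise match too.
touch -t 202003010000 start
touch -t 202009010000 end

find logs -type f -newer start ! -newer end
# -> logs/mid.log
```

<p>Swap the final -ls or bare listing for -exec rm {} \; once you are happy the selection is right.</p>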
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) power pack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find and friends, it makes sense that you will want to change your shell to a modern version of bash (or zsh, if that&#39;s the way you roll) too.</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change your shell without complaint.</p>
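<p>As a sketch, assuming Macports has put bash under its default /opt/local prefix, the whole dance looks like this:</p>
<pre><code>$ sudo port install bash
$ echo '/opt/local/bin/bash' | sudo tee -a /etc/shells
$ chsh -s /opt/local/bin/bash
</code></pre>
<p>Open a new terminal and echo $BASH_VERSION to confirm you&#39;re running the Macports build.</p>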
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo 'SELECT column FROM table WHERE some_column = "somevalue"' \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/\(.*\)/[\1];/'
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because of shell escaping rules around single quotes, the quote character is passed in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
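<p>To try the pipeline without a database handy, you can stand in for the mysql step with printf (the colour values here are made up):</p>
<pre><code>$ printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/\(.*\)/[\1];/'
['red','green','blue'];
</code></pre>
<p>Note the escaped \( and \) in the sed expression; sed&#39;s basic regular expressions treat bare parentheses as literal characters, so the \1 backreference only works with the escaped form.</p>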
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in this date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products, and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic; it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each iteration. While this means it also has to be reconstructed each time, it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean, in the absolute best case, the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
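<p>Assuming the same /mnt/ubuntu mountpoint as above, that&#39;s just:</p>
<pre><code>$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>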
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. This is typically used on grouped products when determining if all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column was first and the product id column second (in the if branch they are selected the other way around). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array result set where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows, one per unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept, that software is about communication, is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise software.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to Twig, edit your ~/.vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open up a login shell indirectly, such as with the su - command, or via an explicit login-shell option some terminal emulators and desktop environments provide. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc only gets sourced if .bash_profile sources it itself.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
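<p>The &#39;source .bashrc from .bash_profile&#39; pattern is easy to try end-to-end with a throwaway HOME directory. The file contents below are a minimal sketch, not my actual dotfiles:</p>

```shell
# Build a disposable HOME so we don't touch real dotfiles.
demo_home=$(mktemp -d)

# .bash_profile: one-time environment setup, then chain to .bashrc.
cat > "$demo_home/.bash_profile" <<'EOF'
export FROM_PROFILE=yes
[ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"
EOF

# .bashrc: settings meant for every interactive shell.
cat > "$demo_home/.bashrc" <<'EOF'
export FROM_BASHRC=yes
EOF

# A login shell (-l) reads .bash_profile, which pulls in .bashrc too.
result=$(HOME="$demo_home" bash -l -c 'echo "$FROM_PROFILE $FROM_BASHRC"')
echo "$result"
```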
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
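<p>If you want to see what the middle of that pipeline does in isolation (nothing dpkg-related is touched), feed sed a sample selections line by hand; &#39;oldpkg&#39; here is a made-up package name:</p>

```shell
# A fake 'dpkg --get-selections' line piped through the same sed
# substitution; the tabs mimic dpkg's output format.
printf 'oldpkg\t\t\tdeinstall\n' | sed 's/deinstall/purge/'
```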
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Note: modifyvm may only be used when the VM is powered off; use controlvm to change settings on a running VM.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libXss and the various Qt libraries.</p>
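<p>A quick filter I find handy here, since ldd output gets long: pipe it through grep so only the unresolved libraries show up. Using /bin/ls as a stand-in binary:</p>

```shell
# Show only missing shared libraries; grep prints nothing when all
# dependencies resolve, so fall back to a confirmation message.
ldd /bin/ls | grep 'not found' || echo 'all dependencies satisfied'
```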
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately, while upgrading my test-suites to be 3.6 compatible, is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause for this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The changed code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; origfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;${IMAGE%.jpg}-resized.jpg&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch into a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying: push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
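<p>The whole lifecycle is easy to try out locally, with a bare repository standing in for the remote. All the paths below are throwaway temp directories, so nothing real is touched:</p>

```shell
# A bare repo acts as our 'remote'.
workdir=$(mktemp -d)
git init -q --bare "$workdir/remote.git"

# A local repo with one commit and a scratch branch.
git init -q "$workdir/local"
cd "$workdir/local"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git branch scratch

# Push the branch up, then delete it with the leading-colon syntax.
git push -q "$workdir/remote.git" scratch
git push -q "$workdir/remote.git" :scratch

# The remote no longer lists the branch.
git -C "$workdir/remote.git" branch
```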
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until the end of its input (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
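<p>For example (the -i flag asks file for MIME-style output; on BSD/macOS it&#39;s -I instead, and the exact charset guessed can vary by version):</p>

```shell
# Write a byte sequence that is 'café' in cp1252/latin-1, then ask
# file what encoding it thinks the file uses.
printf 'caf\xe9\n' > legacy.txt
file -i legacy.txt    # e.g. legacy.txt: text/plain; charset=iso-8859-1
```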
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39;, which connects the command&#39;s output to a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a> exposed as a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
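<p>You can see the mechanism in isolation: echo shows the pathname Bash substitutes in, and diff is the classic use. This is bash-specific, and the exact /dev/fd path varies by system:</p>

```shell
# echo receives a filename, not the inner command's output
echo <(true)                             # prints something like /dev/fd/63
# a common use: compare two command outputs without temporary files
diff <(printf 'a\n') <(printf 'a\n') && echo identical
```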
<p>Sounds complicated but looks simple.</p>
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply going down the list of modules unchecking everything. First you need to uncheck everything you can and save changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s pm-disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
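<p>The substitution mechanism itself is easy to see in miniature with echo; the backtick form and the nestable $() form are equivalent:</p>

```shell
# The inner command's stdout becomes arguments to the outer command.
echo `printf 'one two'`        # backtick form, as used above
echo "$(printf 'one two')"     # $() form nests and quotes more cleanly
```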
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
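<p>To confirm the change took, id lists a user&#39;s groups by name. The commented lines mirror the post&#39;s example; run them as root, substituting your own user and group:</p>

```shell
# id -nG with no argument lists the current user's groups by name.
id -nG                              # groups before the change
# sudo usermod -a -G wheel aaron    # as root: append 'wheel' for user 'aaron'
# id -nG aaron                      # 'wheel' should now be listed as well
```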
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
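<p>Equivalently, you can let git write the file for you. Here HOME points at a scratch directory so the sketch is self-contained; on a real server you would run these as the jenkins user (whose home is /var/lib/jenkins on Ubuntu, as above):</p>

```shell
# git config --global writes the [user] section into $HOME/.gitconfig.
mkdir -p /tmp/jenkins-home
export HOME=/tmp/jenkins-home
git config --global user.name  'Jenkins'
git config --global user.email 'jenkins@localhost'
cat "$HOME/.gitconfig"
```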
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework Test Cases, we have our ant build file setup and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an at-first-glance daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to first do a build before the project workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reload the server configuration</li>
<li>restart - restart the server</li>
<li>exit - shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
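<p>A small aside: the three http commands can be wrapped in a shell function so you do not have to remember the URL format. This is only a sketch; the function name and the JENKINS_URL variable are my own convention, not part of Jenkins:</p>

```shell
# Wrapper for the three http commands; JENKINS_URL is an assumed
# environment variable, defaulting to a local server on port 8080.
jenkins_cmd() {
  case "$1" in
    reload|restart|exit)
      curl -fsS "${JENKINS_URL:-http://localhost:8080}/$1" ;;
    *)
      echo "usage: jenkins_cmd reload|restart|exit" >&2
      return 1 ;;
  esac
}

# e.g. on a live server: jenkins_cmd reload
```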
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
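<p>If you prefer not to open an editor, the same edit can be scripted with sed. A minimal sketch, demonstrated on a scratch copy of the file; on a real install the target is /etc/default/jenkins and the sed needs sudo:</p>

```shell
# Create a scratch file mimicking the relevant part of /etc/default/jenkins
printf '# port for HTTP connector (default 8080; disable with -1)\nHTTP_PORT=8080\n' > jenkins.defaults

# Flip the port line in place; on a real box:
#   sudo sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' /etc/default/jenkins
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' jenkins.defaults

grep '^HTTP_PORT=' jenkins.defaults   # prints HTTP_PORT=8081
rm jenkins.defaults
```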
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this: of all the caching quirks in Magento, this is the one I&#39;ve burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature, and the following query worked very well.</p>
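<p>For reference, here is a sketch of the kind of query involved (the gist below may differ in detail): it reads per-table data and index sizes from INFORMATION_SCHEMA.TABLES and reports the largest tables first. It is printed via a heredoc here; pipe it into mysql against your own server.</p>

```shell
# Emit a table-size report query; on a real box run:
#   sh thisfile | mysql -uuser -p mydb
cat <<'SQL'
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb,
       table_rows
FROM   information_schema.TABLES
WHERE  table_schema = DATABASE()
ORDER  BY (data_length + index_length) DESC
LIMIT  20;
SQL
```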
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from a sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful by itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved either using the package manager or aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Model_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Model_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because a subsequent git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig</p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote</p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>To avoid having to do this in every new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
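<p>As a hedged aside: newer versions of git can collapse options 3 and 4 into the very first push, since -u (--set-upstream) records the tracking relationship while pushing. A sketch against a throwaway bare repository rather than GitHub:</p>

```shell
set -e
tmp=$(mktemp -d)
git init --bare --quiet "$tmp/remote.git"     # stand-in for the GitHub repo
git init --quiet "$tmp/work"
cd "$tmp/work"
git -c user.email=you@example.com -c user.name=you \
    commit --allow-empty -m 'initial commit' --quiet
git remote add origin "$tmp/remote.git"
branch=$(git symbolic-ref --short HEAD)       # master, or main on newer git
git push --quiet -u origin "$branch"          # push and set upstream in one go
git config "branch.$branch.remote"            # prints: origin
```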
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of the tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
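<p>The resulting helper looks roughly like the sketch below (the gist may differ in detail). It is shown as a dry run: each mysqldump invocation is echoed rather than executed, since running it for real needs a live server, and the table list would normally come from <code>mysql -N -e &quot;SHOW TABLES LIKE &#39;mytables\_%&#39;&quot; mydb</code> rather than being passed in by hand.</p>

```shell
# Dry-run sketch: dump each named table from a database to its own file.
# Drop the leading 'echo' to actually run mysqldump.
dump_tables() {
  db=$1; shift
  for t in "$@"; do
    echo mysqldump -uuser -p "$db" "$t" ">" "$t.sql"
  done
}

dump_tables mydb mytables_orders mytables_customers
```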
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first hash value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby, the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Programming With Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C), for example, is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export and import copies of MySQL databases, whether for testing and debugging or simply for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily run into the gigabytes.</p>
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge, it is after all uncompressed text, at 1 byte per character if the output is ANSI or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip or bzip2 your datadump inline, as part of the mysqldump command itself.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: compressing inline slows the dump down, and with MyISAM (the default) you’re locking tables for the whole duration and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but a prolonged dump should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of compression format is important: you have to trade off decompression speed against file size. The extra CPU time spent decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whichever you choose, importing a compressed datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables are locked while a mysqldump is performed, meaning other clients are not permitted to write to them until the dump completes. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the data without blocking other clients. Writes can occur while the backup is taking place without affecting the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean consistency between MyISAM tables can be lost as writes occur during the backup process; that risk has to be weighed against blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves import performance, as MySQL only rebuilds the table’s indexes once at the end of the import. With keys enabled, the indexes are updated after each row is inserted, which is suboptimal for a batch import.</p>&#13;
<p>By default each statement inserted into an InnoDB table is autocommitted. This carries unnecessary overhead when performing a batch import, as you really only need to commit once each table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on the tool, but the best place to start is the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>Say you have a string, for example:</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching all the way to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply change the expanding part of the expression to <code>.*?</code>, which makes it match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide/show untracked files in the status report. Passing one or more paths will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring through:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger, and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and found the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes one can have a completely different meaning, such as 'bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
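<p>The Bash behaviour can be checked in a couple of lines (the /tmp/argv_demo.sh script name is just an example):</p>

```shell
# Bash, like C and PHP, puts the script's own name in $0;
# the passed arguments start at $1.
cat > /tmp/argv_demo.sh <<'EOF'
echo "$0"
echo "$1"
EOF
bash /tmp/argv_demo.sh helloworld
# prints /tmp/argv_demo.sh, then helloworld
```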
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name, while in Ruby, as in PERL, it is the first argument passed into the program.</p>&#13;
<p>I'm trying to think which makes more sense; probably the Ruby/PERL implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software, restart Eclipse, and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and MacPorts MySQL, you might run into some drama trying to get Ruby and MySQL playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation, as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document with an ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional 'filter' parameter. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As on many platforms, in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic-table format, suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a>, or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf</a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
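<p>A word of caution on that blanket substitution: it will also rewrite the string 'latin1' anywhere it appears inside post content, not just in the schema declarations. A more surgical sketch (the patterns below assume the stock MySQL DDL forms and are illustrative, not exhaustive; the file name is a scratch example):</p>

```shell
# Demo on a two-line sample dump: the DDL charset declaration is
# rewritten, while a literal 'latin1' inside post data is left alone.
printf '%s\n' \
  'CREATE TABLE post (body text) ENGINE=MyISAM DEFAULT CHARSET=latin1;' \
  'INSERT INTO post VALUES ("I collect latin1 fonts");' > dump.sql
sed -i \
  -e 's/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8/g' \
  -e 's/CHARACTER SET latin1/CHARACTER SET utf8/g' \
  dump.sql
cat dump.sql
```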
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This utility can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
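<p>Before importing, it is worth sanity-checking that the converted dump really is valid UTF-8. One quick sketch, shown here on a scratch file: round-trip it through iconv again, which is a no-op for valid UTF-8 but stops with an error on the first bad byte sequence.</p>

```shell
# If the file is valid UTF-8 this round-trip succeeds and iconv exits 0;
# otherwise it fails loudly with an "illegal input sequence" error.
printf 'héllo wörld\n' > sample.sql
iconv -f utf-8 -t utf-8 sample.sql > /dev/null && echo "valid UTF-8"
```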
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a VBulletin language pack that was supplied in a legacy encoding, convert it to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
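<p>One gotcha worth flagging with the language file: iconv converts the bytes, but the XML declaration at the top of the file may still claim the old encoding. Assuming the pack's prolog uses the common double-quoted form (a scratch file name is used below), a sketch to bring it in line:</p>

```shell
# The bytes are now UTF-8, so make the XML declaration agree with them.
printf '%s\n' '<?xml version="1.0" encoding="ISO-8859-1"?>' '<language/>' > lang.xml
sed -i 's/encoding="ISO-8859-1"/encoding="UTF-8"/' lang.xml
cat lang.xml
```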
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is twofold: a) to get yourself past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just variables declared capitalised. Oh, sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
Mac OS X). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable sigils and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL '\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array of 20 bytes, enough for 19 characters plus the NUL terminator, and str itself points to an address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite or stream_get_contents on the memory stream, or push it out over the network using the TCP streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, e.g. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit the POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated by having a mail reader retrieve any missed payment notifications and parse them. For each failure notification you can add a retry to a job queue, or other batch-processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers, to ensure no serious malfunction goes unnoticed.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of keeping third-party code out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals &lt;target-dir&gt;</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short, do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, addressing your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
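<p>To make the translation concrete, here is a simplified sketch, in Python rather than Magento's actual PHP, of how a getModel() alias becomes a file path. The MyPackage_MyModule_Model prefix is a made-up example standing in for whatever your module's config.xml declares:</p>

```python
def uc_words(name):
    # Capitalise each underscore-separated chunk, roughly what
    # Magento's uc_words() helper does to the model alias.
    return "_".join(part.capitalize() for part in name.split("_"))

def alias_to_path(alias, prefix="MyPackage_MyModule_Model"):
    # 'mymodule/a_long_name_for_a_model' -> class name -> file path.
    model_name = alias.split("/", 1)[1]
    class_name = prefix + "_" + uc_words(model_name)
    # The autoloader maps underscores to directory separators.
    return class_name.replace("_", "/") + ".php"

print(alias_to_path("mymodule/a_long_name_for_a_model"))
# MyPackage/MyModule/Model/A/Long/Name/For/A/Model.php
print(alias_to_path("mymodule/alongnameforamodel"))
# MyPackage/MyModule/Model/Alongnameforamodel.php
```

<p>Note there is no alias you can pass that would ever resolve to ALongNameForAModel.php, which is why camelcased model names break.</p>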
<p>On Windows this is fine; on a case-sensitive file system (e.g. case-sensitive HFS+ on Mac, or a typical Unix file system) it will not work.</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' WHERE custom_theme = 'default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse?". The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If, like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up either in the db (the eav_attribute table) or in the admin backend under Catalog -&gt; Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The basename and dirname shell utilities are handy and behave similarly to their php-based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table with a date more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
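<p>If you want to sanity-check the cutoff arithmetic outside the database, the same calculation can be sketched in Python:</p>

```python
from datetime import date, timedelta

# Mirrors DATE_SUB(CURDATE(), INTERVAL 30 DAY) for a "current"
# date of 2010-05-20.
cutoff = date(2010, 5, 20) - timedelta(days=30)
print(cutoff)  # 2010-04-20

# A row whose date_column is earlier than the cutoff is more
# than 30 days old, so the WHERE clause above matches it.
assert date(2010, 4, 1) < cutoff
```
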
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you give the installer into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three-column layout, you need to edit page.xml and, within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. The type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or in other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore, within the &lt;customer_account&gt; element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so all customer account pages) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where the second parameter to the delete command was removed. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>To get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale (you can get a list of available locales by calling 'locale -a'). If you're using en_GB or de_DE, just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
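<p>A minimal sketch of the change (using en_GB here; swap in whichever UTF-8 locale 'locale -a' lists for your language):</p>

```shell
# Export a UTF-8 locale for the current session; put the same line in
# ~/.profile (or ~/.bash_profile) to make it permanent
export LC_ALL='en_GB.UTF-8'
# Verify the environment now reports the UTF-8 locale
echo "$LC_ALL"
```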
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. The extended characters, however, like accents, symbols and umlauts, differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables&#39; character set matches your normalised encoding. I recommend utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<pre>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</pre>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to pass a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<pre>htmlentities($text, null, &#39;utf-8&#39;);</pre>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
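<p>As a footnote, the same conversion is available from the shell via the iconv command-line tool, which is handy for normalising whole files (a sketch; the CLI and the PHP extension accept the same encoding names):</p>

```shell
# \351 (octal for 0xE9) is e-acute in iso-8859-1; iconv re-encodes it
# as the two-byte utf-8 sequence so a utf-8 browser/terminal renders it
printf 'caf\351\n' | iconv -f iso-8859-1 -t utf-8
```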
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, at least not with the BSD sed that ships with Mac OSX, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
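<p>The underlying difference is that BSD sed (as shipped with OSX) requires a backup-suffix argument after -i, while GNU sed makes it optional. A portable sketch that works with both is to supply a real suffix and delete the backup afterwards:</p>

```shell
# Work on a scratch file; -i.bak edits in place and keeps a .bak copy
# (a form both GNU and BSD sed accept)
f=$(mktemp)
echo 'hello world' > "$f"
sed -i.bak 's/hello/goodbye/g' "$f"
cat "$f"          # goodbye world
rm -f "$f" "$f.bak"
```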
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br /> http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expression gives the number of elements in FILES, and the seq command produces a sequence of numbers from x to y, so $(seq 0 $((${#FILES[@]} - 1))) yields every valid array index. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4.</p>&#13;
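<p>For what it's worth, if you don't need the numeric index, bash can iterate the array directly — a touch less smelly, and quoting the expansion keeps elements containing spaces intact:</p>

```shell
# Loop over array elements directly; "${FILES[@]}" expands to one word
# per element, preserving any embedded whitespace
FILES=( a/path/to/a/file1 a/path/to/another/file2 )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```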
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: base urls, test payment or shipping account credentials, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
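<p>The alias leans on <em>git symbolic-ref --short HEAD</em> to resolve the current branch name; you can see what it returns with a quick throwaway repository (a sketch, assuming git is installed):</p>

```shell
# Create a throwaway repo, switch to a feature branch, and show what
# symbolic-ref resolves to (the name the alias plugs into --set-upstream-to)
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b zendesk
git symbolic-ref --short HEAD   # prints: zendesk
```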
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
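<p>With that set, a merge that cannot be fast-forwarded is refused outright — a throwaway-repo sketch (assuming git is on your PATH; the identity config is just scratch data):</p>

```shell
# Build two diverged branches, then watch merge.ff=only refuse the merge
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email 'you@example.com'
git config user.name 'you'
git commit -q --allow-empty -m 'base'
trunk=$(git symbolic-ref --short HEAD)        # master or main
git checkout -q -b feature
git commit -q --allow-empty -m 'feature work'
git checkout -q "$trunk"
git commit -q --allow-empty -m 'trunk work'   # branches have now diverged
git config merge.ff only
git merge feature || echo 'merge refused: rebase first'
```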
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion, and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdom that is hard to argue with. A simple, and you would think obvious, example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
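<p>To make the point concrete, here is a sketch of the idea using a hand-rolled stand-in for PHPUnit&#39;s assertSame() (the function, names and values are my own invention, not from the book). The optional message turns a bare type/value failure into a self-explanatory one:</p>
<pre><code>function assertSameValue($expected, $actual, $message = &#39;&#39;)
{
    if ($expected !== $actual) {
        $detail = sprintf(&#39;Failed asserting that %s is identical to %s.&#39;,
            var_export($actual, true), var_export($expected, true));
        throw new Exception($message === &#39;&#39; ? $detail : $message . &#39; &#39; . $detail);
    }
}

try {
    assertSameValue(30.0, 20.0 + 5.0, &#39;Cart total should include shipping&#39;);
} catch (Exception $e) {
    echo $e-&gt;getMessage(), PHP_EOL;
    // Cart total should include shipping Failed asserting that 25.0 is identical to 30.0.
}
</code></pre>
<p>Without the message you would know only that two floats differ; with it, you know which business rule broke.</p>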
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: statist TDD and mockist/London-school TDD. The former is a test style mainly interested in setting up some state, running a behaviour, and checking that the end state matches what you expected. The mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
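<p>The difference is easiest to see side by side. This sketch uses classes invented purely for illustration, and a hand-rolled spy where PHPUnit&#39;s mock builder would normally do the work:</p>
<pre><code>interface Mailer { public function send($to, $subject); }

// A hand-rolled spy; PHPUnit&#39;s getMock() would normally generate this.
class SpyMailer implements Mailer {
    public $sent = array();
    public function send($to, $subject) { $this-&gt;sent[] = array($to, $subject); }
}

class Order {
    private $status = &#39;new&#39;;
    private $mailer;
    public function __construct(Mailer $mailer) { $this-&gt;mailer = $mailer; }
    public function complete($email) {
        $this-&gt;status = &#39;complete&#39;;
        $this-&gt;mailer-&gt;send($email, &#39;Your order is complete&#39;);
    }
    public function getStatus() { return $this-&gt;status; }
}

// Statist style: run the behaviour, then inspect the resulting state.
$order = new Order(new SpyMailer());
$order-&gt;complete(&#39;jo@example.com&#39;);
echo $order-&gt;getStatus(), PHP_EOL; // complete

// Mockist style: assert on the message passed to the collaborator.
$spy = new SpyMailer();
$order = new Order($spy);
$order-&gt;complete(&#39;jo@example.com&#39;);
echo $spy-&gt;sent[0][1], PHP_EOL; // Your order is complete
</code></pre>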
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
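<p>If you want to see NULLIF in action without touching a real schema, here is a quick sketch using PHP&#39;s PDO with an in-memory SQLite database, which supports NULLIF and MIN the same way (the products table and its values are invented for illustration):</p>
<pre><code>$db = new PDO(&#39;sqlite::memory:&#39;);
$db-&gt;exec(&#39;CREATE TABLE products (group_id INTEGER, price DECIMAL(10,2))&#39;);
$db-&gt;exec(&#39;INSERT INTO products VALUES (1, 0.00), (1, 9.99), (1, 4.50), (2, 0.00)&#39;);

$stmt = $db-&gt;query(
    &#39;SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
       FROM products GROUP BY group_id&#39;
);
foreach ($stmt as $row) {
    echo $row[&#39;group_id&#39;], &#39;: &#39;,
        $row[&#39;min_price&#39;] === null ? &#39;no price above zero&#39; : $row[&#39;min_price&#39;], PHP_EOL;
}
// 1: 4.5
// 2: no price above zero
</code></pre>
<p>Note that a group containing only zero prices yields NULL, so decide how you want to present that case.</p>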
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
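<p>With PHP-FPM running, it&#39;s worth a quick sanity check that the extensions Magento and TAF depend on actually loaded. A throwaway script along these lines (the list just mirrors the ports installed above; run it with the Macports php binary) does the trick:</p>
<pre><code>$needed = array(&#39;curl&#39;, &#39;gd&#39;, &#39;mcrypt&#39;, &#39;iconv&#39;, &#39;soap&#39;, &#39;pdo_mysql&#39;, &#39;openssl&#39;);
$missing = array();
foreach ($needed as $ext) {
    if (!extension_loaded($ext)) {
        $missing[] = $ext;
    }
}
echo ($missing ? &#39;Missing: &#39; . implode(&#39;, &#39;, $missing) : &#39;All required extensions loaded&#39;), PHP_EOL;
</code></pre>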
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cacheable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
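<p>As a sketch of what such a wrapper might look like (the class and method names are my own invention, not an existing library; note that unlike strrchr, these return the portion after the needle rather than including it):</p>
<pre><code>class Str
{
    private $value;
    public function __construct($value) { $this-&gt;value = $value; }

    // Everything after the last occurrence of $needle.
    public function afterLast($needle) {
        $pos = strrpos($this-&gt;value, $needle);
        return $pos === false ? $this-&gt;value : substr($this-&gt;value, $pos + strlen($needle));
    }

    // Everything after the first occurrence of $needle.
    public function afterFirst($needle) {
        $pos = strpos($this-&gt;value, $needle);
        return $pos === false ? $this-&gt;value : substr($this-&gt;value, $pos + strlen($needle));
    }
}

$url = new Str(&#39;http://www.google.com/a/b/c/d.img&#39;);
echo $url-&gt;afterLast(&#39;/&#39;), PHP_EOL;  // d.img
echo $url-&gt;afterFirst(&#39;/&#39;), PHP_EOL; // /www.google.com/a/b/c/d.img
</code></pre>
<p>The method names describe intent, so you never again have to remember which of strstr/strrchr does what.</p>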
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically, if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepools&#39; higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
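<p>Putting that resolution order to work, the override for this fix boils down to a couple of shell commands (a sketch, run from the root of a stock Magento install; adjust paths if your layout differs):</p>

```shell
# Copy the core class into the local code pool, then apply the fix to the copy.
# The classloader will now resolve Mage_GoogleCheckout_Model_Api_Xml_Checkout
# to the local version instead of the core one.
src=app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
dst=app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
mkdir -p "$(dirname "$dst")"
if [ -f "$src" ]; then
    cp "$src" "$dst"
fi
```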
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. Back in November, though, I did resolve to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic, and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a>:</p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard of but rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more I read, the more I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages.</p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80, and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. And although most of the vocabulary of OO software development was formalised during Smalltalk&#39;s creation, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I consider that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, I wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day: for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero, so at the end of a thirty minute work sprint I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
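<p>If &#39;d&#39; and the numeric shortcuts aren&#39;t available out of the box in your setup, note that they are thin aliases over zsh&#39;s directory stack. A rough .zshrc sketch (frameworks like oh-my-zsh ship something very similar, so check yours before duplicating it):</p>

```shell
setopt auto_pushd            # push every cd onto the directory stack
setopt pushd_ignore_dups     # keep the stack free of duplicates
alias d='dirs -v | head -10' # list the stack, numbered
# numeric shortcuts: typing 1 jumps to stack entry 1, and so on
for i in {1..9}; do alias $i="cd -$i"; done
```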
<p>Moving around directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local, the default, and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, if we want to debug during a phpunit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t applying the php.ini settings early enough for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its localhost on port 9000 back to port 9000 on your own machine. So when xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
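<p>For reference, the tunnel relies on xdebug pointing at localhost on the VM. A sketch of the relevant php.ini settings (xdebug 2.x setting names; check them against your installed version):</p>

```ini
xdebug.remote_enable = 1
; with the ssh tunnel up, "localhost" on the VM reaches your IDE
xdebug.remote_host = localhost
xdebug.remote_port = 9000
```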
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age strictly speaking is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable version of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
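<p>Declaring dependencies is now just a few lines of json per project. A minimal composer.json sketch (the package names here are made up for illustration):</p>

```json
{
    "require": {
        "acme/package-x": "1.0.*",
        "acme/package-y": "2.0.*@beta"
    }
}
```

<p>Running composer install then resolves and fetches each project&#39;s dependencies independently - exactly the per-project isolation PEAR could never offer.</p>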
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months, <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have seen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time or patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
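<p>The same flag works with git log, which is handy for reviewing what each incoming commit touches after a fetch:</p>
<pre><code>$ git log --name-only master..origin/master
</code></pre>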
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password, and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
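<p>As an aside, with the grant tables skipped mysqladmin can shut the temporary instance down without a password:</p>
<pre><code>$ mysqladmin -u root shutdown
</code></pre>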
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means: if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously deleted them with $ git push origin :branch from another host).</p>
<p>To refresh the list, I need to prune my stale branches. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
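<p>As an aside, newer versions of git can fetch and prune in a single step:</p>
<pre><code>$ git fetch --prune origin
</code></pre>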
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying; if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer type to the string &#39;disabled&#39;, the existing observer is replaced with one that will never be fired.</p>
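<p>If the embedded gist doesn&#39;t render in your feed reader, the shape of the override looks roughly like this (shown for a single logging event only; I&#39;m assuming the observer node name &#39;log&#39; from Mage_Log&#39;s own config.xml, so check yours matches):</p>
<pre><code>&lt;config&gt;
    &lt;frontend&gt;
        &lt;events&gt;
            &lt;controller_action_predispatch&gt;
                &lt;observers&gt;
                    &lt;log&gt;
                        &lt;type&gt;disabled&lt;/type&gt;
                    &lt;/log&gt;
                &lt;/observers&gt;
            &lt;/controller_action_predispatch&gt;
        &lt;/events&gt;
    &lt;/frontend&gt;
&lt;/config&gt;
</code></pre>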
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, you can see that if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy, replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, who fixed one occurrence but (and programmers are human) missed the other, identical, line. And so we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately tell a fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags, which let you provide environment-specific configuration for your provisioning. They are not yet supported by Chef Solo in the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>
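<p>For feed readers that strip the embedded gist, the build boils down to something like the following. I&#39;m sketching from memory here; the repository layout and gem filename move around between releases, so treat the paths as assumptions and defer to the gist:</p>
<pre><code>$ git clone git://github.com/opscode/chef.git
$ cd chef
$ gem build chef.gemspec
$ gem install ./chef-*.gem --no-ri --no-rdoc
</code></pre>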

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
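<p>As a rough sketch (the menu node and module names below are illustrative placeholders; match the menu path to the entry you want to hide):</p>
<pre><code>&lt;config&gt;
    &lt;menu&gt;
        &lt;some_menu_entry&gt;
            &lt;depends&gt;
                &lt;module&gt;Mage_NoSuchModule&lt;/module&gt;
            &lt;/depends&gt;
        &lt;/some_menu_entry&gt;
    &lt;/menu&gt;
&lt;/config&gt;
</code></pre>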
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (the backslash stops bash from interpreting the semicolon itself).</p>
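<p>If your version of GNU find supports it, the built-in -delete action does the same job without spawning an rm process per file:</p>
<pre><code>$ find . -type f -newer start_date_file ! -newer end_date_file -delete
</code></pre>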
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher rated powersupply can support a lower rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered, is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
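<p>Put together, it looks like this (the path assumes the default Macports prefix of /opt/local):</p>
<pre><code>$ sudo sh -c &#39;echo /opt/local/bin/bash &gt;&gt; /etc/shells&#39;
$ chsh -s /opt/local/bin/bash
</code></pre>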
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
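<p>For instance, if the column held the (made up) values foo, bar and baz, the pipeline would emit:</p>
<pre><code>[&#39;foo&#39;,&#39;bar&#39;,&#39;baz&#39;];
</code></pre>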
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not support a purely negative clause inside an OR expression like this.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
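<p>With concrete values, and a hypothetical expires_at field, that might look like:</p>
<pre><code>-(-expires_at:[NOW TO NOW+30DAYS] AND expires_at:[* TO *])
</code></pre>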
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a>, who looks in-depth at the topic. I generally think, though, that in Object Oriented software you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules, and this caused that product/salesrule index loop to detonate.</p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references at the end of each iteration. While that means the validator has to be constructed anew on each iteration, it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
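<p>In commands, that setup looks something like this (the archive and folder names here are placeholders; substitute those of the version you downloaded):</p>
<pre><code>$ sudo tar xzf PhpStorm-archive.tar.gz -C /opt
$ sudo ln -s /opt/PhpStorm-extracted-folder /opt/PhpStorm
</code></pre>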
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. And by break, in the absolute best case, I mean the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
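<p>That step is a straight copy (assuming your root partition is mounted at /mnt/ubuntu as above):</p>
<pre><code>$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>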
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
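<p>For the record, the repair commands run inside the chroot were along these lines (the grub target disk is an assumption; substitute your own boot device):</p>
<pre><code>$ apt-get update
$ apt-get upgrade
$ grub-install /dev/sda
$ update-grub
</code></pre>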
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. This is typically used on grouped products when determining whether all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key, and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end, you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the Sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a Python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to .twig files, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell in other ways, such as if you use the su - command, or run an explicit login shell sometimes provided by a desktop environment. In these cases the rule still applies: a login shell means .bash_profile is sourced, and .bashrc only runs if your .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
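<p>A minimal sketch of that second approach (the PATH entry is just an example of one-time setup):</p>

```shell
# ~/.bash_profile: sourced by login shells only.
# One-time environment setup lives here:
export PATH="$HOME/bin:$PATH"

# Then pull in the interactive-shell settings so login shells get them too:
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```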
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block from your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. To me that seems far more likely to stifle innovation than to encourage it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
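<p>To see what the middle of that pipeline does, feed it a couple of lines of simulated <em>dpkg --get-selections</em> output (the package names here are made up):</p>

```shell
# only the line whose state is "deinstall" survives grep,
# and sed rewrites that state to "purge" for --set-selections
printf 'nano\t\t\tinstall\nold-tool\t\t\tdeinstall\n' |
    grep deinstall |
    sed 's/deinstall/purge/'
```

<p>Piping the result back into <em>dpkg --set-selections</em> then marks those packages for purging.</p>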
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
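<p>For a VM that is already running, the same forwarding rule can (to my knowledge) be added and removed on the fly with <em>controlvm</em>, using the identical rule string:</p>

```shell
# add / remove a NAT port-forward on a RUNNING VM (same rule format as modifyvm)
VBoxManage controlvm "VM name" natpf1 "guestssh,tcp,,2222,,22"
VBoxManage controlvm "VM name" natpf1 delete "guestssh"
```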
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that: we need compatible x86 shared libraries. When you hit these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing 32-bit versions of both libxss and several Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, pass the -S -masm=intel arguments to gcc and compile once with -m32 and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying, things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately upgrading my test-suites to be 3.6 compatible, is the extraction of database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The changed code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource for anyone using Oneiric, I think, is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 down to 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
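<p>As an aside, the echo-through-sed rename in that loop can also be done with plain bash parameter expansion, saving two processes per image (a sketch; the filename is made up):</p>

```shell
IMAGE="holiday.jpg"
# ${IMAGE%.jpg} strips the trailing .jpg, then we append -resized.jpg
echo "${IMAGE%.jpg}-resized.jpg"    # prints holiday-resized.jpg
```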
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? Convert would fit it inside the bounding box, resizing to something like 1152x720 to preserve the 16:10 ratio. If we <em>really</em> want it to ignore common sense and squish things down to exactly 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action:</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push <em>nothing</em> into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
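<p>To see the whole lifecycle end-to-end, here is a self-contained sketch using a throwaway local &#39;remote&#39; (the repository and branch names are made up):</p>

```shell
# set up a throwaway bare repository to act as the remote
git init --bare remote.git
git clone remote.git work

# give the clone an identity and an initial commit, then publish it
git -C work -c user.email=dev@example.com -c user.name=dev \
    commit --allow-empty -m "initial commit"
git -C work push origin HEAD

# create and publish a develop branch...
git -C work checkout -b develop
git -C work push origin develop

# ...then delete it remotely by pushing "nothing" into it
git -C work push origin :develop
git -C work ls-remote --heads origin    # develop is no longer listed
```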
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how sponge waits for end-of-file (EOF) before opening and writing to the output file. I.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
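<p>A quick way to see why the two encodings differ: byte 0x80 is the euro sign in cp1252, but a C1 control code in iso-8859-1. A small sketch you can run anywhere iconv is installed:</p>

```shell
# 0x80 decodes to the euro sign under cp1252...
printf '\x80' | iconv -f CP1252 -t UTF-8      # prints €
# ...but to the invisible control character U+0080 under iso-8859-1
printf '\x80' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1
```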
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
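<p>You can replay the same trick offline, with a locally built tarball standing in for the download (the file names here are made up):</p>

```shell
# build a small tarball so we have something to extract
mkdir -p atarfile && echo "hello" > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -rf atarfile

# same shape as the wget one-liner, with cat standing in for the download
tar zxv < <(cat atarfile.tar.gz)
cat atarfile/file.txt    # prints hello
```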
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules, and we would prefer non-formatted output, so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>We can now produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
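<p>The substitution pattern itself can be tried anywhere, without a Drupal install. The functions below are purely illustrative stand-ins for the two drush calls:</p>

```shell
# Stand-ins for the drush commands (illustrative only):
# list_items plays the role of `drush pm-list --pipe`,
# count_args the role of `drush pm-disable`.
list_items() { printf 'ad ad_channel click_filter'; }
count_args() { echo "$#"; }

# Unquoted command substitution splits the output on whitespace,
# so each module name arrives as a separate argument.
count_args $(list_items)
```

<p>The inner command runs first; its space-delimited output becomes the argument list of the outer command, which is exactly how the drush one-liner works.</p>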
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but comes with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called &#39;origin&#39; is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
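<p>A throwaway end-to-end run of the same flow, using a local bare repository as the &#39;origin&#39; remote (all paths and the inline identity below are made up for illustration). On any reasonably recent git, <code>git push -u</code> pushes and sets the upstream in a single step:</p>

```shell
# Scratch "remote" plus a working clone (illustrative paths only).
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m 'init'
git push -q origin HEAD                # publish the default branch first

git checkout -q -b my-new-feature      # branch, hack away, commit...
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m 'Initial feature commit'
git push -q -u origin my-new-feature   # -u: push and set upstream together

# Confirm what the local branch now tracks.
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'
```

<p>The final command prints origin/my-new-feature, confirming that pull/fetch/push will track the new remote branch.</p>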
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G are <em>appended</em> to the user&#39;s existing list of supplementary groups. Without it, the user&#39;s existing supplementary groups are replaced by the groups supplied.</p>
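<p>Since usermod needs root, a safe read-only way to sanity-check group membership before and after is the id command (the current user below is just a stand-in for &#39;aaron&#39;):</p>

```shell
# List every group the current user belongs to.
user=$(id -un)
id -nG "$user"

# The primary group always appears in that list; grep -x demands an
# exact whole-line match, so this exits 0 when membership is confirmed.
id -nG "$user" | tr ' ' '\n' | grep -qx "$(id -gn "$user")"
```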
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
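<p>The whole round trip can be rehearsed safely in a scratch repository; everything below (paths, file and branch names, the inline identity) is illustrative:</p>

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m 'init'
git branch develop

echo 'wip' > feature.txt     # work started on the wrong branch
git add feature.txt          # plain `git stash` only takes tracked/staged changes
git stash -q

git checkout -q develop
git stash pop -q             # replay the shelved work here
git add feature.txt
git -c user.name=t -c user.email=t@example.com commit -q -m 'Apply stashed changes'

git rev-parse --abbrev-ref HEAD   # now on develop, with the change committed
```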
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
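<p>Here is the first form exercised end-to-end against a local bare repository standing in for the remote (paths and the inline identity are illustrative):</p>

```shell
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/clone" 2>/dev/null
cd "$tmp/clone"
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m 'init'
git push -q origin HEAD:develop      # seed a remote 'develop' branch

git checkout -q -b develop origin/develop
git rev-parse --abbrev-ref '@{upstream}'   # what the new local branch tracks
```

<p>The last command prints origin/develop: the new local branch was created from, and tracks, the remote branch.</p>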
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf command-line tool), do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem: your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
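<p>Equivalently, git can write that file for you. The snippet below targets a scratch file so it can be tried anywhere; point <code>--file</code> at the real /var/lib/jenkins/.gitconfig (or run <code>git config --global</code> as the jenkins user) for the actual fix:</p>

```shell
cfg=$(mktemp)
git config --file "$cfg" user.name  'Jenkins'
git config --file "$cfg" user.email 'jenkins@localhost'

cat "$cfg"                           # the generated [user] section
git config --file "$cfg" user.name   # read a single value back
```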
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed, or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give our project a name. Choose whatever you like, but for the purposes of this tutorial I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to do a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: reload the server configuration</li>
<li>restart: restart the server</li>
<li>exit: shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
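<p>If you issue these often, a small wrapper keeps the URLs straight. This is only a sketch: it assumes an unauthenticated instance on plain http, and the function name is made up:</p>

```shell
# Send one of the three HTTP admin commands to a Jenkins server.
jenkins_admin() {
  server=$1
  command=$2
  case $command in
    reload|restart|exit) curl -fsS "http://$server/$command" ;;
    *) echo "usage: jenkins_admin host:port reload|restart|exit" >&2; return 2 ;;
  esac
}

# e.g. jenkins_admin localhost:8080 reload
```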
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
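<p>If you manage more than one box, the port change can be scripted. The snippet below edits a scratch copy so it is safe to try anywhere; point it at /etc/default/jenkins (with sudo) to do it for real:</p>

```shell
# Work on a scratch copy containing the same two lines as the real file.
f=$(mktemp)
printf '%s\n' \
  '# port for HTTP connector (default 8080; disable with -1)' \
  'HTTP_PORT=8080' > "$f"

# Rewrite whatever value is present with the port of your choice.
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=9090/' "$f"
grep '^HTTP_PORT=' "$f"
```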
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this: of all the caching quirks in Magento, this is the one I&#39;ve sunk the most time into.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to  to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp session.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful by itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved with your package manager of choice, e.g. aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Model_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use</p>
<pre><code>Mage_Payment_Model_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option with the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream </p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best, with the least amount of work.</p>
<p>To avoid having to do this for every new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
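The whole flow can be exercised end-to-end with a local bare repository standing in for GitHub (all paths below are throwaway temp dirs). One note: current Git spells the option-4 flag --set-upstream-to rather than --set-upstream, and git push -u is an equivalent shortcut:

```shell
tmp=$(mktemp -d)
git init --bare "$tmp/foo.git"          # stands in for the GitHub remote
git init "$tmp/work"
cd "$tmp/work"
git symbolic-ref HEAD refs/heads/master # pin the branch name regardless of init.defaultBranch
git config user.email dev@example.com
git config user.name "Dev"
echo hello > README
git add README
git commit -m 'initial commit'
git remote add origin "$tmp/foo.git"
git push origin master
# Modern spelling of option 4:
git branch --set-upstream-to=origin/master master
git pull   # no longer complains about a missing merge ref
```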
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400meg database and a lot of orders, it took ~ 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
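The shape of the trick looks like this. Here a stub mysql() function stands in for a live server so the plumbing itself is runnable (the table names are made up); the gist linked below is the real, fleshed-out version:

```shell
# Stub: pretend the server returned these table names
mysql() { printf 'mytables_a\nmytables_b\nother_table\n'; }
# Ask for the table list, keep only those matching the prefix
tables=$(mysql -N -e "SHOW TABLES" mydb | grep '^mytables_')
echo "$tables"
# Against a real server the dump would then be roughly:
#   mysqldump -uuser -p mydb $tables > mytables.sql
```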
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a symbol as a hash key than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols made of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively yet. My gut, though, says no. I feel in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it&#8217;s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: piping through a slow compressor stretches out the dump, and (by default, with MyISAM) you&#8217;re locking tables and denying other clients access to them for the duration. InnoDB implements row level locking, which is slightly less offensive, but it should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysql issues a begin statement before dumping the contents of a table, ensuring a consistent state of the table without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to it during the backup process. That risk is weighed up against the risk of blocking access to the table during a lengthy backup process.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
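Rolled together, the flags discussed above give a single invocation. It's stored and echoed here rather than executed, since actually running it needs a reachable server, and the credentials and database name are placeholders:

```shell
cmd="mysqldump --single-transaction --skip-lock-tables --disable-keys --no-autocommit -uuser -p mydatabase"
# Compress on the way out with the faster gzip:
echo "$cmd | gzip -c > mydump.sql.gz"
```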
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using Mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified or deleted tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. It is the canonical naming convention for a bare git repository (i.e. one that holds only the meta information and no working copy) and usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially cloned from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>. This is the familiar model of Subversion and <code>svn commit</code>.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to it to be picked up.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating PEAR's missing cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger, and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby and Perl it is the first argument passed into the program.</p>&#13;
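<p>This difference is easy to verify. In Ruby the script's own name lives in the built-in global <code>$0</code> (also spelled <code>$PROGRAM_NAME</code>), not in ARGV. Here's a small sketch that writes a throwaway script to a temp file and runs it with one argument (the <code>argv_demo</code> file name is just an illustration):</p>

```ruby
require 'tempfile'

# Write a tiny Ruby script to disk, then run it with an argument to show
# where the program name and the arguments actually end up.
script = Tempfile.new(['argv_demo', '.rb'])
script.write(<<~RUBY)
  puts File.basename($0)   # the script's own name lives in $0, not ARGV
  puts ARGV[0]             # ARGV holds only the real arguments
RUBY
script.close

output = `ruby #{script.path} helloworld`.split("\n")
puts output[0]  # the temp script's own name, e.g. argv_demo20110426-....rb
puts output[1]  # => helloworld
```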
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may look similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software, restart Eclipse, and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and MacPorts MySQL, you might run into some drama trying to get Ruby and MySQL playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a>, over the course of the index's lifecycle you'll often want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature; you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
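<p>As an aside, Ruby offers much the same filtered introspection out of the box; here's a rough Ruby analogue of the getActionList() helper above (the controller class is made up for illustration):</p>

```ruby
# A hypothetical controller with two public 'actions' and a private helper.
class MyController
  def index_action; end
  def list_action;  end

  private

  def helper; end
end

# public_instance_methods(false) lists only this class's own public methods
# (the false argument excludes inherited ones) - roughly what the PHP
# getActionList() helper above returns.
actions = MyController.public_instance_methods(false).sort
p actions  # => [:index_action, :list_action]
```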
<p>As on many platforms, in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic-table layout, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book, or in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This utility can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
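<p>If you're curious what iconv is actually doing to each byte, the same latin1-to-UTF-8 conversion can be sketched in Ruby with String#encode:</p>

```ruby
# 'café' as it would appear in a latin1-encoded dump: é is the single byte 0xE9.
latin1 = "caf\xE9".force_encoding('ISO-8859-1')

# Transcode to UTF-8, exactly as iconv -f latin1 -t utf-8 does per character.
utf8 = latin1.encode('UTF-8')

puts utf8            # => café
puts utf8.bytesize   # => 5 (é becomes two bytes in UTF-8)
```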
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to install, convert it with iconv too.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative and well spaced, with a minimum of fonts. Giant mastheads, fancy bullets and a mess of typefaces aren't impressing anyone - and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads: clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file', not the client's inbox.</li>&#13;
<li>Don't list every course you did at university and expect the client to care. Of our graduate applicants, not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job: work out what specific skills you got from your courses and detail each skill, qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two-page rule. Keep going, but when you're finished, pare the text down into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leaves a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Refactor your CV as mercilessly as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life, not least this blog itself. If you listened to every top-ten list of what not to include in your CV, you'd quickly find there's absolutely nothing you should put in it. Critically consider what you read about what a good CV looks like and make your own mind up, based on the supporting arguments made and the feedback your own CV gets. For example, if you disagree with point two and decide to include an interests section, ask recruiters when they call what they thought of it: did it provide value or was it noise? If you're getting interviews, ask the recruiters what in your CV is standing out. If you're not, then ask what feedback, if any, there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similarity of the titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables declared capitalised. Oh, sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
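<p>To make that concrete, here's a minimal sketch (the constant names are my own invention): reassigning a constant only triggers a warning, and even freeze protects the object, not the binding.</p>&#13;

```ruby
# Reassigning a Ruby constant emits a warning, but execution continues.
ANSWER = 42
ANSWER = 43        # warning: already initialized constant ANSWER
puts ANSWER        # prints 43

# freeze protects the object a constant refers to, not the binding itself.
GREETING = "hello".freeze
begin
  GREETING << " world"   # raises FrozenError: can't modify frozen String
rescue FrozenError
  puts "GREETING's string is frozen, yet GREETING = 'bye' would still only warn"
end
```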
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves someone some time: when trying to get an oauth token from salesforce, use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver: when making your oauth login request, ensure you DO NOT submit it as a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely check out the force api explorer app, great for quickly testing soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver: Zend_Db_Select::assemble() - outputs the object's current SQL state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design-challenged developer like myself, http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[For some people, using #VIM comes naturally - i.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>The handy method Zend_Db_Select::assemble() will output the select object's state as SQL. Very useful when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys who haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable sigils, and be left with a recognisable C fragment.</p>&#13;
<p>The biggest benefit of learning a bit of C or C++, though, is to get an “appreciation” of memory management. C is unusual among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is equally a characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes - room for 19 characters plus the terminating NUL - and str itself points to the address in memory where those bytes live. Now if we try to write more characters than that, C won't auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do exactly what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in. These were originally where your business logic was supposed to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, methods within Zend_Pdf expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can then use the usual file functions you would typically associate with an on-disk file.</p>&#13;
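<p>For example, here's a small sketch of building a CSV entirely in memory (the field names are arbitrary):</p>&#13;

```php
<?php
// Write CSV rows to a memory-backed stream instead of a file on disk.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, ['id', 'name']);
fputcsv($fh, [1, 'widget']);

// Seek back to the start and read the whole buffer out again.
rewind($fh);
$csv = stream_get_contents($fh);
fclose($fh);

echo $csv;
```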
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way, is actually very simple, and that is to simply use curl to resubmit the callback. This is assuming you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first  place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL, including the encoded POST data, and the other containing the response from your server.</p>&#13;
<p>Now assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
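<p>The mapping itself is easy to reproduce. Below is a plain-PHP sketch of the underscore-to-path substitution a Magento-style autoloader performs (classToPath is an illustrative name for this post, not a real Magento function):</p>

```php
// Sketch of a Magento-style autoloader's class-to-path mapping:
// underscores become directory separators, with each segment
// capitalised via ucwords.
function classToPath($class)
{
    return str_replace(' ', '/', ucwords(str_replace('_', ' ', $class))) . '.php';
}

echo classToPath('MyPackage_MyModule_Model_A_Long_Name_For_A_Model'), "\n";
// MyPackage/MyModule/Model/A/Long/Name/For/A/Model.php
echo classToPath('MyPackage_MyModule_Model_Alongnameforamodel'), "\n";
// MyPackage/MyModule/Model/Alongnameforamodel.php
```

<p>On a case-insensitive file system a near-miss may still resolve; on a case-sensitive one, only an exact match will.</p>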
<p>On Windows this is fine; on a case-sensitive file system, e.g. case-sensitive HFS+ (Mac) or most Unix file systems, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second, while doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old.</p>&#13;
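<p>If you want to sanity-check the interval arithmetic outside of MySQL, plain PHP's strtotime performs the equivalent subtraction (a sketch only; this is not the MySQL function itself):</p>

```php
// Mirror MySQL's DATE_SUB('2010-05-20', INTERVAL 30 DAY) in plain PHP
$cutoff = strtotime('2010-05-20 -30 days');
echo date('Y-m-d', $cutoff); // 2010-04-20
```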
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by subbing the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included (note the echo: getChildHtml returns the html rather than printing it). Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected, run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client where they were pasting content from Word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters, like accents, symbols and umlauts, differ: a cp-1252 trademark symbol has a different byte sequence in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, because those bytes are not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
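<p>Putting the two steps together in a minimal, runnable sketch (the 0xA9 byte is the iso-8859-1 copyright symbol; ENT_QUOTES is used for the quote-style parameter rather than null):</p>

```php
// Normalise iso-8859-1 input to utf-8, then encode it for HTML output,
// telling htmlentities that the text is utf-8.
$isoText  = "\xA9 2010";                            // "(c) 2010" in iso-8859-1
$utf8Text = iconv('ISO-8859-1', 'UTF-8', $isoText); // normalise to utf-8
echo htmlentities($utf8Text, ENT_QUOTES, 'UTF-8');  // &copy; 2010
```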
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It isn't, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick, with the BSD sed that ships with Mac OSX (which requires an explicit backup-suffix argument after -i), is to use it like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code>. GNU sed, by contrast, accepts a bare -i.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page, you would use the following code (note the echo; getBaseUrl returns the URL rather than printing it):</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In OSX, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expression returns the number of elements in FILES, and seq produces a sequence of numbers from x to y, so $(seq 0 $((${#FILES[@]} - 1))) yields one index for each element of the array. If you call seq 0 4, you will get the numbers 0 through 4, each on its own line.</p>&#13;
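As an aside, bash can also iterate an array's values (or its indices) directly, which avoids seq altogether. A sketch, assuming bash 3 or later:

```shell
# Sketch: iterating the array directly instead of via seq.
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

# "${FILES[@]}" expands to each element as a separate word
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done

# "${!FILES[@]}" expands to the indices, if you need them
for I in "${!FILES[@]}"; do
  echo "$I: ${FILES[$I]}"
done
```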
<p>So while the syntax is a little smelly, its terse power is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: base URLs, or test payment and shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux; you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. That&#39;s handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
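If you&#39;d rather not edit the file by hand, the same alias can be registered via git config. A sketch (HOME is pointed at a scratch directory here so the demo leaves your real ~/.gitconfig alone):

```shell
# Sketch: register the alias from the shell instead of editing
# ~/.gitconfig directly.
export HOME=$(mktemp -d)   # scratch HOME so the demo doesn't touch your config
git config --global alias.sup '!git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`'
git config --global --get alias.sup
```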
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess receives a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
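A minimal sketch of the problem, plus one of those workarounds (process substitution, which is bash-specific):

```shell
# Sketch: an assignment made inside a pipeline stage is lost, because
# that stage runs in a subshell.
COUNT=0
printf 'a\nb\nc\n' | while read -r LINE; do COUNT=$((COUNT + 1)); done
echo "after pipeline: $COUNT"        # still 0

# Workaround: process substitution keeps the loop in the current shell.
COUNT=0
while read -r LINE; do COUNT=$((COUNT + 1)); done < <(printf 'a\nb\nc\n')
echo "after substitution: $COUNT"    # 3
```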
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>&#13;
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible, you can force the merge through with</p>&#13;
<pre><code>$ git merge --no-ff
</code></pre>
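To see the setting in action, here is a sketch in a throwaway repository (file names, branch name and commit messages are invented for the demo):

```shell
# Sketch: with merge.ff set to "only", merging a diverged branch is
# refused instead of silently creating a merge commit.
DEMO=$(mktemp -d) && cd "$DEMO"
git init -q .
git config user.email demo@example.com && git config user.name demo
git config merge.ff only
echo one > a.txt && git add a.txt && git commit -qm base
git branch feature                 # feature starts at base
echo two > b.txt && git add b.txt && git commit -qm diverge
git checkout -q feature
echo three > c.txt && git add c.txt && git commit -qm feature-work
git checkout -q -                  # back to the original branch
git merge feature || echo "not a fast-forward: rebase first"
```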
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and a brief comparison between the two principal TDD xUnit styles: Statist TDD and Mockist/London School TDD. The former is a style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature testing tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group; however, some products actually have a 0.00 price (for whatever reason). You don&#39;t want to show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
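If you want to try the trick without a MySQL server to hand, sqlite3 shares the relevant NULLIF and MIN behaviour. A sketch (the table and data below are invented for the demo, and assume the sqlite3 CLI is installed):

```shell
# Sketch: NULL values are ignored by MIN, so NULLIF(price, 0)
# excludes zero prices from the aggregate.
sqlite3 :memory: "
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 7.0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;
" > nullif_demo.out
cat nullif_demo.out   # group 1's minimum is 4.5, not 0
```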
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem: PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock.</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc if you use zsh
</code></pre>
<p>Now that the alias is active, we can check it&#39;s working. Note how the direct path invocation picks up the system PHP (5.3.x), while the aliased command uses the Macports PHP (5.4.x):</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache or Nginx? It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        fastcgi_read_timeout 120;
        fastcgi_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing to do is set up a self-signed SSL certificate / key pair and store them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go:</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file for the mysql version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate the primitive types (String, Integer, Array, Float etc.). I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
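<p>For contrast, and purely as an illustration of consistent naming (this is Python, nothing to do with PHP&#39;s core API), the same task in a language with a symmetric pair of names, partition and rpartition:</p>

```python
url = 'http://www.google.com/a/b/c/d.img'

# partition() splits on the FIRST occurrence of the separator,
# rpartition() on the LAST; the r- prefix is the only difference.
# (Unlike strrchr, the [2] slice excludes the separator itself.)
print(url.rpartition('/')[2])  # d.img
print(url.partition('/')[2])   # /www.google.com/a/b/c/d.img
```

<p>Whatever you think of Python, the first-vs-last variants are at least discoverable from each other&#39;s names.</p>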
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you pair a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it or, worse, fears it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
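<p>The validation failure is easy to reproduce outside Magento. Here Python&#39;s Decimal stands in for the decimal parsing done on Google&#39;s side (an assumption about their validator, but it matches the error message):</p>

```python
from decimal import Decimal, InvalidOperation

# An empty string is not a parseable decimal, while '0.00' is.
# This is exactly the distinction the unit-price element trips over.
try:
    Decimal('')
except InvalidOperation:
    print("'' is not a valid decimal")

print(Decimal('0.00'))  # parses fine
```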
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
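<p>Under the hood, the -s flag maps straight onto Unix signals: quit sends SIGQUIT (graceful) and stop sends SIGTERM (immediate) to the master process, whose pid nginx records in a pid file (commonly /var/run/nginx.pid, though the path depends on your build). A minimal sketch of that mechanism, using a sleep process as a stand-in since we can&#39;t assume a running nginx here:</p>

```shell
# Send the same signal `nginx -s quit` would, but to a stand-in process.
# With a real nginx you would use: kill -QUIT "$(cat /var/run/nginx.pid)"
sleep 60 &
pid=$!
kill -QUIT "$pid"          # the graceful-shutdown signal; stop sends TERM instead
wait "$pid" 2>/dev/null || true
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "shut down"
```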
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year&#39;s resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. Back in November, though, I did resolve to read more technical books, and in particular to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. And although the team formalised much of the vocabulary of OO software development while building Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight compulsion to keep the unread count at zero, so at the end of a thirty minute work sprint I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer, and after a few 30 minute sprints I pick up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to compose a long command. In my setup it fires up vim. This is great for some of the long Rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
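<p>A quick illustration with a couple of made-up passwd-style rows (column reads stdin directly, so the cat isn&#39;t strictly necessary):</p>

```shell
# Align colon-delimited records into columns; sample data, not a real passwd file.
printf 'root:x:0:0\nwww-data:x:33:33\n' | column -s':' -t
```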
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer then switches you directly to that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
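<p>These tricks are zsh-specific, but bash has a cousin of the directory stack via pushd and popd: dirs -v lists the stack much like zsh&#39;s d, and tilde expansion (~1, ~2, ...) references entries by index. A small sketch:</p>

```shell
# bash's directory stack: pushd adds entries, dirs -v lists them with indexes,
# and ~N expands to stack entry N, so `cd ~1` mimics typing `1` in zsh.
cd /tmp
pushd /usr > /dev/null
dirs -v        # 0 is /usr, 1 is /tmp
cd ~1          # jump straight to stack entry 1
pwd            # /tmp
```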
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
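<p>The reason this works: :w !cmd pipes the buffer to cmd&#39;s standard input instead of writing the file itself, % expands to the current file&#39;s name, and tee (running under sudo) writes its stdin back out to that file with root&#39;s permissions. The same plumbing, minus vim and sudo, with a throwaway file standing in for the protected one:</p>

```shell
# tee writes whatever arrives on stdin to the named file, which is exactly
# what vim hands it via `:w !`. /tmp/demo.conf is a stand-in target.
printf 'server_name example.com;\n' | tee /tmp/demo.conf > /dev/null
cat /tmp/demo.conf
```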
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have Macports in /opt/local, the default, and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings early enough for xdebug to hook into them.</p>
<p>The solution to this problem is to use ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to its own localhost on port 9000 back to the ssh client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
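<p>If you debug a particular machine regularly, the same tunnel can be declared once in ~/.ssh/config; RemoteForward is the config-file equivalent of -R (the host alias here is illustrative):</p>

```
Host myvm.local
    RemoteForward 9000 localhost:9000
```

<p>With that in place, every plain ssh myvm.local session carries the xdebug tunnel automatically.</p>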
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months, PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some determined efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable version of package y, while package z requires the beta version of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months, <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time or patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
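<p>Process substitution works by exposing a command&#39;s output as a /dev/fd path, so any tool that insists on a file name can still consume pipeline output. A self-contained illustration using diff, another file-oriented tool, in place of xmllint (this is a bash feature, so it won&#39;t work in plain sh):</p>

```shell
# <(cmd) expands to a /dev/fd/N path whose contents are cmd's output,
# letting file-only tools read from a pipeline.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo identical
```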
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care exactly what differs between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
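<p>To see it end to end, here&#39;s a throwaway repository you can build in a scratch directory (the file names and commit message are invented for the demo):</p>

```shell
# Create a scratch repo, commit two files, then list them with --name-only.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo one > app.rb
echo two > config.yml
git add .
git commit -qm 'initial commit'
git show --name-only --pretty=format:'%s' HEAD   # subject line, then the file list
```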
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was broadly respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform, you learn through brutal experience what works and what doesn&#39;t, and your nose becomes finely tuned to bullshit. When I read a PHP article I know instinctively whether what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? One thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how the PHP community is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh it, then, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
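<p>As an aside, more recent versions of git (from around 1.6.6, if memory serves; check git-fetch&#39;s man page) let you combine the fetch and the prune into one step:</p>

```shell
# fetch from origin and, in the same pass, delete any local remote-tracking
# branches that no longer exist on the remote
git fetch --prune origin
```

The end result is the same as running a plain fetch followed by git remote prune origin.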
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer type to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
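<p>For anyone reading without the embedded gist, the override takes this general shape. This is a sketch only: the event and observer node names here are assumed from Mage_Log&#39;s config.xml, so verify them against your own Magento version before relying on it.</p>

```xml
<config>
    <frontend>
        <events>
            <!-- visitor logging fired on every page view -->
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
            <!-- customer session logging -->
            <customer_login>
                <observers><log><type>disabled</type></log></observers>
            </customer_login>
            <customer_logout>
                <observers><log><type>disabled</type></log></observers>
            </customer_logout>
        </events>
    </frontend>
</config>
```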
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any Magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production, and the resulting Magento report/xxxx files swamped everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all regular files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence, escaped so the shell passes the semicolon through to find rather than interpreting it itself.</p>
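<p>Putting the pieces together, here is a self-contained run of the technique, with file names and dates invented for illustration. Note that for plain files, GNU find&#39;s -delete action is a simpler alternative to -exec rm:</p>

```shell
# create some files with known modification times
touch -t 202312010000 before.txt
touch -t 202401050000 inside.txt
touch -t 202402150000 after.txt

# create the two boundary marker files for the date range
touch -t 202401010000 start_marker
touch -t 202401312359 end_marker

# list, then delete, regular files modified between the markers
# (the end marker itself can match the range, so exclude markers by name)
find . -type f ! -name '*_marker' -newer start_marker ! -newer end_marker -ls
find . -type f ! -name '*_marker' -newer start_marker ! -newer end_marker -delete
```

Only inside.txt falls between the two markers, so it is the only file listed and removed.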
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh, if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
  | mysql -uuser -ppass --silent yourdb \
  | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
  | paste -s -d &#39;,&#39; - \
  | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Because you can&#39;t put a literal single quote inside the single-quoted awk program, you pass one in via the q variable instead. paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. awk or any other concatenation approach would do just fine here too.</p>
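<p>To see the tail of the pipeline in action without a database handy, you can stand in for mysql&#39;s output with printf (the sample values are invented):</p>

```shell
# simulate three rows of --silent mysql output, then quote, join and wrap them
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# → ['red','green','blue'];
```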
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way, e.g.:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is:</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While that means it also has to be constructed afresh each time round the loop, it allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
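<p>If you want to sanity-check the entry before Unity picks it up, a quick grep for the required keys will do. This is a hypothetical check, not part of the original workflow; the paths are the example ones from above.</p>

```shell
# Write the entry to a temp file and confirm the [Desktop Entry] group
# header and the required Type/Name/Exec keys are all present.
entry=$(mktemp)
cat > "$entry" <<'EOF'
[Desktop Entry]
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
EOF
grep -c -E '^(\[Desktop Entry\]|Type=|Name=|Exec=)' "$entry"   # prints 4
```

<p>The proper tool for this job is <code>desktop-file-validate</code> from the desktop-file-utils package, if you have it installed.</p>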
<p>Hit your Windows or Super key and type in &#39;Php&#39;; you should see the newly created desktop entry there. Once it has launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils...and irssi. Enough utilities to let you build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils and so on. Once these were built, you&#39;d rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. By break, in the absolute best case, I mean merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it: boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a DHCP address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, where you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
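<p>A sketch of the copy, with a temp directory standing in for /mnt/ubuntu so it can be tried outside a real recovery session:</p>

```shell
# In a real recovery session the target is /mnt/ubuntu/etc/resolv.conf;
# here a throwaway directory plays the part of the mounted root.
chroot_dir=$(mktemp -d)            # stands in for /mnt/ubuntu
mkdir -p "$chroot_dir/etc"
cp /etc/resolv.conf "$chroot_dir/etc/"
ls "$chroot_dir/etc"               # lists resolv.conf
```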
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will behave pretty much as it would if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host. If you need access to some specific hardware, you need to set that up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so the group was considered out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) call to see the SQL being generated. The SQL was fine, and when I pasted it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where they were in the reverse order). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it gets used as the key.</p>
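<p>To illustrate the collapse (a shell sketch, not Zend&#39;s code): keying a map by the first column makes duplicate keys overwrite each other, which is exactly what happened with the status-first result set.</p>

```shell
# Three "rows" with the status code in column one, entity id in column two.
# Keying by column one collapses them to two entries, just as fetchPairs did.
printf '1 101\n1 102\n0 103\n' \
  | awk '{pairs[$1]=$2} END {for (k in pairs) print k, pairs[k]}' \
  | sort
# -> 0 103
#    1 102
```

<p>Three input rows come out as only two pairs. Keyed by entity id instead, all three products would keep their own status.</p>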
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. Even in 2012, this is an unusual approach for a lot of developers. Typically I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end, you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised throughout the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough-and-ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting is clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Michael Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile gets sourced only on login. Specifically, this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell in other ways, such as with the su - command or via the explicit login-shell option some desktop environments provide. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc runs only if .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change much. But it&#39;s quite reasonable to just put a &#39;source ~/.bashrc&#39; line in your .bash_profile and then put everything in .bashrc.</p>
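<p>A sketch of that delegation pattern, with temp files standing in for the real dotfiles so it&#39;s safe to try anywhere:</p>

```shell
# ~/.bash_profile delegates to ~/.bashrc, so login and non-login shells
# end up with the same environment.
rcfile=$(mktemp)                   # stands in for ~/.bashrc
profile=$(mktemp)                  # stands in for ~/.bash_profile
echo 'export GREETING=hello' > "$rcfile"
printf '[ -f %s ] && . %s\n' "$rcfile" "$rcfile" > "$profile"
. "$profile"                       # what a login shell does
echo "$GREETING"                   # -> hello
```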
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method($this-&gt;anything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
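<p>To see what the sed stage does, here&#39;s a sketch run against a couple of fabricated selection lines (real <em>dpkg --get-selections</em> output has the same two-column shape):</p>

```shell
# Lines marked "deinstall" (package removed, config files kept) are
# rewritten as "purge" selections; installed packages pass through as-is.
printf 'vim\t\t\tinstall\nold-pkg\t\t\tdeinstall\n' | sed 's/deinstall/purge/'
# -> vim             install
#    old-pkg         purge
```

<p>Feeding the rewritten selections back through <em>dpkg --set-selections</em> marks those packages for purging, and <em>dpkg -Pa</em> (purge pending) then carries it out.</p>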
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Use modifyvm only while the VM is powered off; for running VMs use controlvm instead.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 shared libraries. When you get this sort of issue, it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing both a compatible libXss and several Qt libraries.</p>
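<p>When chasing these problems, it can help to filter the ldd output down to just the unresolved entries. A minimal sketch, using /bin/ls as a stand-in for /usr/bin/skype so it runs anywhere:</p>

```shell
# Show only the shared libraries the loader cannot resolve; a binary
# with no missing dependencies prints the fallback message instead.
ldd /bin/ls | grep 'not found' || echo 'all shared libraries resolved'
```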
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying, things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately upgrading my test-suites to be 3.6 compatible, is the extraction of database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change in code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining the 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.:</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
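<p>As an aside, the sed pipeline used above to build the output filename can also be written with bash parameter expansion, saving a couple of processes per file. A small sketch (the filename is illustrative):</p>

```shell
# Strip the .jpg suffix and append -resized.jpg, all within the shell.
IMAGE="holiday.jpg"
RESIZED="${IMAGE%.jpg}-resized.jpg"
echo "$RESIZED"
```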
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
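<p>Worth noting: newer versions of Git (1.7.0 onwards) also accept an explicit --delete flag, which does the same thing as the bare-colon refspec but reads more clearly. A self-contained sketch using throwaway repositories (all paths and names are illustrative):</p>

```shell
set -e
# Create a bare 'remote' and a clone of it under a temporary directory.
TMP=$(mktemp -d)
git init --bare -q "$TMP/atestrepo.git"
git clone -q "$TMP/atestrepo.git" "$TMP/clone" 2>/dev/null
cd "$TMP/clone"
git config user.email you@example.com
git config user.name you
git symbolic-ref HEAD refs/heads/master   # pin the branch name for the demo
git commit --allow-empty -q -m 'initial'
git push -q origin master
git push -q origin master:develop         # create the remote develop branch
git branch -r                             # origin/develop and origin/master
git push -q origin --delete develop       # equivalent to: git push origin :develop
git branch -r                             # only origin/master remains
```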
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file. I.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
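<p>If you don&#39;t have moreutils to hand, a temporary file gets you the same in-place effect; this sketch (the filename is illustrative) is essentially what sponge does for you:</p>

```shell
# Convert to a temp file first, then move it over the original; writing
# straight back with '>' would truncate the input before iconv reads it.
FILE="example.txt"
printf 'caf\351\n' > "$FILE"                       # 'café' in cp1252
iconv -f cp1252 -t utf-8 "$FILE" > "$FILE.tmp" && mv "$FILE.tmp" "$FILE"
cat "$FILE"                                        # now valid UTF-8
```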
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
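<p>For comparison, the same result can be had with an ordinary pipe. This offline sketch builds a small tarball first, with cat standing in for wget -q -O - (the names are illustrative):</p>

```shell
set -e
# Make a throwaway tarball, remove the source, then extract via a pipe.
mkdir -p atarfile
echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -rf atarfile
cat atarfile.tar.gz | tar zxvf -
cat atarfile/file.txt
```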
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply tearing through the list of modules unchecking everything. First you uncheck everything you can and save changes. Then you go through the list again, disabling the previously greyed-out modules (the ones that still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
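<p>On newer Git (1.7.0 onwards), the push and --set-upstream steps collapse into a single push -u. A self-contained sketch with a throwaway remote (paths and names are illustrative):</p>

```shell
set -e
# Set up a scratch repository with a bare 'origin' remote.
TMP=$(mktemp -d)
git init --bare -q "$TMP/origin.git"
git init -q "$TMP/work"
cd "$TMP/work"
git config user.email you@example.com
git config user.name you
git remote add origin "$TMP/origin.git"
git commit --allow-empty -q -m 'Initial feature commit'
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature       # push and set upstream in one go
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'   # prints origin/my-new-feature
```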
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
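<p>You can confirm the result with the id command; a sketch using the current user rather than &#39;aaron&#39; so it runs anywhere (note the new group only shows up in sessions started after the change):</p>

```shell
# List all groups, primary and supplementary, for the current user.
id -nG "$(id -un)"
```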
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
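<p>If you don&#39;t need to keep the stashed copy around afterwards, <code>git stash pop</code> applies the stash and drops it from the stash list in one step; the same workflow then looks like this:</p>

```shell
git stash            # 'save' is the default subcommand
git checkout develop
git stash pop        # apply the stash and remove it from the stash list
```
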
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
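<p>On newer versions of Git there is also a shorthand that names the local branch for you, equivalent to the first form above:</p>

```shell
# creates a local 'develop' branch tracking origin/develop and switches to it
git checkout --track origin/develop
```
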
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly, you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to view them through Jenkins.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give it a name. Choose whatever you like; for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository; for example, mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run one build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries; it&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help with this, but perhaps the first one to check out is the email functionality, which lets your development team know immediately when something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - reload the server configuration</li>
<li>restart - restart the server</li>
<li>exit - shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
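<p>If you prefer to script the port change rather than edit the file by hand, a one-line sed does it (8081 here is just an example port):</p>

```shell
# rewrite the HTTP_PORT line in place; adjust 8081 to taste
sudo sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' /etc/default/jenkins
```
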
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved either through a graphical package manager or with aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something built in to do it:</p>
<pre><code>Mage_Payment_Model_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use:</p>
<pre><code>Mage_Payment_Model_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
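<p>For reference, Magento 1 writes this debug output to a per-method log file under var/log, named after the payment method code (worth verifying against your own install). A tiny shell helper, entirely my own naming, to locate it from the store root:</p>

```shell
# Hypothetical helper: compute the file _debug() output lands in,
# i.e. var/log/payment_<method code>.log (an assumption -- confirm
# against your own Magento install).
payment_log() {
  printf 'var/log/payment_%s.log\n' "$1"
}

# Usage from the store root, e.g.:
#   tail -f "$(payment_log usaepay)"
payment_log usaepay
```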
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can list the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of <em>git pull</em> will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>To avoid having to do any of this in future, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
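<p>For the curious, the whole dance can be rehearsed end-to-end with throwaway repositories. This sketch uses temporary local paths in place of github, and the --set-upstream-to spelling that newer git versions prefer over --set-upstream:</p>

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the bare remote repository (github in the post)
git init --bare -q "$tmp/foo.git"

# An existing local repository with some history
git init -q "$tmp/work"
cd "$tmp/work"
git checkout -q -b master
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m 'initial commit'

# Add the remote and push the existing master branch into it
git remote add origin "$tmp/foo.git"
git push -q origin master

# Option 4: tell master which remote branch it tracks...
git branch --set-upstream-to=origin/master master

# ...after which a bare `git pull` knows what to merge
git pull -q
```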
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
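<p>The core of that suggestion is easy to sketch in shell. The version below (function names are mine, not the gist&#39;s) splits the filtering out so it needs no database at all: tables_matching keeps only the stdin lines matching a glob, and mysqldump_bypattern feeds it the output of SHOW TABLES (user and database names are placeholders):</p>

```shell
# Keep only the table names (one per line on stdin) matching a glob.
tables_matching() {
  pattern=$1
  while IFS= read -r table; do
    case $table in
      $pattern) printf '%s\n' "$table" ;;
    esac
  done
}

# Sketch of the full dump: list tables, filter, dump the survivors.
# -N suppresses the column header; credentials are placeholders.
mysqldump_bypattern() {
  db=$1 pattern=$2
  mysql -uuser -p -N -e 'SHOW TABLES' "$db" \
    | tables_matching "$pattern" \
    | xargs mysqldump -uuser -p "$db"
}

# The filter on its own:
printf 'mytables_a\nmytables_b\nother\n' | tables_matching 'mytables_*'
```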
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. Until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails, 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another to get its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
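<p>One caveat: bash aliases don&#39;t really take arguments, so the &quot;$@&quot; above expands to the shell&#39;s own (usually empty) positional parameters; the alias only works because the filename happens to be appended after the redirection. A function form behaves predictably with multiple arguments:</p>

```shell
# Function form of the ql alias: arguments pass through as expected.
ql() {
  qlmanage -p "$@" >/dev/null 2>&1
}
```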
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII, or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it&#39;s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur during the backup process; that risk is weighed up against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
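<p>Pulling the above together, a full backup invocation might look like the one assembled below. The helper only prints the command so you can eyeball it before running it; the user, database and file naming are all placeholders to adapt:</p>

```shell
# Build (but don't run) a mysqldump pipeline using the options
# discussed above. All names here are illustrative placeholders.
backup_cmd() {
  db=$1
  printf '%s\n' "mysqldump --single-transaction --skip-lock-tables --disable-keys --no-autocommit -uuser -p $db | gzip -c > $db-$(date +%F).sql.gz"
}

backup_cmd mydatabase
```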
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified and deleted tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of Subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring through:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the missing cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (e.g. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small: some basic webpage views, maybe a few forms, and likely some sort of search functionality. This is pretty basic, and if things need to change, you can normally change them in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> instructions up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool, and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C, and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
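If you do need the program's name in Ruby, it lives in the special variable $0 (aliased as $PROGRAM_NAME) rather than in ARGV. A quick sketch (the filename is hypothetical):

```ruby
#!/usr/bin/env ruby
# file argv_demo.rb (hypothetical name)
puts $0           # the script's own name, like argv[0] in C or PHP
puts ARGV.length  # counts only the real arguments
puts ARGV[0]      # first argument, or nil if none were given
```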
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OS X MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in situ update feature; you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic, or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
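If iconv isn't to hand, the same transcoding job can be sketched in a few lines of Ruby with String#encode. The paths here are the hypothetical ones used throughout this post:

```ruby
# Transcode a Latin-1 file to UTF-8 - the same job iconv does above.
def latin1_to_utf8(src, dest)
  data = File.read(src, encoding: 'ISO-8859-1')
  File.write(dest, data.encode('UTF-8'))
end

# Hypothetical paths from this post:
# latin1_to_utf8('forum_db_backup.sql', 'forum_db_backup-utf8.sql')
```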
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you use a translated language pack, convert that to UTF-8 too before importing it:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proof read your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal within the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
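<p>The point is easy to demonstrate. A minimal sketch (the constant names here are invented for illustration): reassigning a constant only produces a warning on stderr, and even freezing an object blocks mutation but not rebinding.</p>

```ruby
# Reassigning a Ruby "constant" emits a warning ("already initialized
# constant"), but the reassignment succeeds anyway.
ANSWER = 42
ANSWER = 43
puts ANSWER   # prints 43

# freeze protects the object from mutation...
NAME = "Aaron".freeze
NAME << "!" rescue puts "can't modify frozen String"
# ...but the constant itself can still be rebound to a new object.
NAME = "Bob"
puts NAME     # prints Bob
```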
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately, in Ruby a constant quacks like a duck but bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves someone some time: when trying to get an OAuth token from Salesforce, use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: from the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve how you apply their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out the $ variable sigils, and be left with a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the language relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes for the string (19 characters plus the terminating NUL), and str itself refers to the address in memory where those bytes live. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do exactly what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be a pure templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then call these functions from PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straight forward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. A way to avoid physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
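<p>As a hedged sketch (the CSV columns here are invented for the illustration), here is fputcsv writing to a memory stream instead of a real file, and the result being read straight back into a variable:</p>

```php
<?php
// Build a CSV entirely in memory: no temp file needed.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, ['id', 'name']);
fputcsv($fh, [1, 'widget']);

rewind($fh);                      // seek back to the start before reading
$csv = stream_get_contents($fh);  // the full CSV as a string
fclose($fh);

echo $csv;
```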
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, e.g. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded POST data), and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong><em>The solution is explained here:</em></strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in environments with case-sensitive file systems.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive file system (e.g., case-sensitive HFS+ on Mac, or most Unix file systems) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse?". The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If, like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just updates to simple attributes, for example a sales-ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
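<p>As a sketch, a batch job built on this might look as follows. Note that 'num_sales' is just this post's example attribute, and computeSalesRank() is a hypothetical stand-in for whatever produces your value:</p>

```php
<?php
// Select only the attribute we intend to rewrite, keeping the
// collection load as light as possible.
$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    // computeSalesRank() is hypothetical: substitute your own logic.
    $product->setNumSales(computeSalesRank($product));

    // Persist just this one attribute rather than calling the far
    // more expensive full $product->save().
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```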
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well; see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash commands are handy and behave similarly to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table that have a date set more than 30 days in the past (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
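<p>For the timestamp case, a sketch might look like this (the orders table and its created_ts column here are hypothetical):</p>

```sql
-- created_ts holds a UNIX timestamp; FROM_UNIXTIME converts it to a
-- DATETIME so it can be compared against DATE_SUB's result.
SELECT * FROM orders
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(created_ts);
```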
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old and will be matched.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where they removed the second parameter passed to the delete command. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun: if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile, and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
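<p>A minimal sketch of the check-and-set sequence (the exact locale names available vary from system to system):</p>

```shell
# List the UTF-8 capable locales this system knows about.
locale -a | grep -i 'utf'

# Export one of them so every child process inherits it, e.g.:
#   export LC_ALL='en_GB.UTF-8'
# then confirm with a plain `locale` call.
```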
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client where they were pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters, like accents, symbols and umlauts, differ. So a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
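<p>Putting the two steps together, a sketch of the normalise-then-encode pipeline (the $cp1252Text input variable is hypothetical):</p>

```php
<?php
// 1. Normalise: convert whatever legacy encoding came in (here
//    assumed to be cp-1252) to utf-8 before storing it.
$utf8Text = iconv('CP1252', 'UTF-8', $cp1252Text);

// 2. Encode for output: tell htmlentities the text is utf-8 so
//    extended characters survive instead of becoming question marks.
echo htmlentities($utf8Text, ENT_QUOTES, 'UTF-8');
```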
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
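<p>The empty '' is a BSD sed quirk (that's the sed Mac OSX ships): there, -i takes a mandatory backup suffix as a separate argument, and '' means "no backup file". GNU sed on Linux instead takes an optional suffix glued onto the flag, so plain -i works. A quick sketch of both forms:</p>

```shell
# Scratch file to edit in place.
printf 'hello world\n' > helloworld.txt

# GNU sed (Linux): no separate suffix argument needed.
sed -i 's/hello/goodbye/g' helloworld.txt

# BSD sed (Mac OSX) equivalent, shown commented out:
#   sed -i '' 's/hello/goodbye/g' helloworld.txt

cat helloworld.txt    # goodbye world
rm helloworld.txt
```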
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common calls you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expression returns the number of elements in FILES, while the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3 and 4, one per line. So $(seq 0 $((${#FILES[@]} - 1))) yields an index for every element of the array.</p>&#13;
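<p>As an aside, when you do not need the numeric index, bash can iterate over the array elements directly; quoting the expansion also keeps elements containing spaces intact. A quick sketch of the equivalent loop:</p>

```shell
# Iterate over the array elements directly rather than by index;
# quoting "${FILES[@]}" preserves each element as a single word.
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```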
<p>So while the syntax is a little smelly, the terse power of it is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment or shipping account credentials, and the like.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento so its encryption model and crypt key are available&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
&#13;
// Encrypt the first command-line argument and print the result&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one-liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go:</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. Handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
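<p>Under the hood, the alias leans on <em>git symbolic-ref</em> to resolve the current branch name. A quick sketch of that piece in a throwaway repo (the branch name here is hypothetical):</p>

```shell
# Demo repo, just to show what symbolic-ref returns.
cd "$(mktemp -d)"
git init -q
git checkout -q -b feature

# The sup alias splices this value into --set-upstream-to=origin/BRANCH.
git symbolic-ref --short HEAD   # prints: feature
```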
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess starts with a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
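<p>A minimal sketch of the gotcha and one of the workarounds (assuming bash): feed the loop by redirection instead of a pipe, so it runs in the current shell.</p>

```shell
# The while loop runs in a subshell here, so COUNT resets
# when the pipeline finishes.
COUNT=0
printf 'a\nb\nc\n' | while read -r LINE; do COUNT=$((COUNT + 1)); done
echo "$COUNT"   # prints 0, not 3

# Workaround: redirect input into the loop (a heredoc here),
# so it executes in the current shell and COUNT survives.
COUNT=0
while read -r LINE; do COUNT=$((COUNT + 1)); done <<EOF
a
b
c
EOF
echo "$COUNT"   # prints 3
```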
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits; they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
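<p>If you want to trial the setting first, you can scope it to a single repository and read the value back to confirm it took:</p>

```shell
# In a throwaway repo: without --global, the setting applies
# only to this repository.
cd "$(mktemp -d)"
git init -q
git config merge.ff only

# Read it back to confirm.
git config merge.ff   # prints: only
```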
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard and I like the idea of those that attempt it, and do a good job, being appropriately rewarded for doing so. </p>
<p>So, back to the book. You find peppered throughout the introduction to PHPUnit subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I perhaps would also have liked to see more of an introduction to TDD itself: motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles, namely Statist TDD and Mockist/London School TDD. The former is a style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out that&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;export PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or ~/.zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias that passes this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working.</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter; both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cacheable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert, or append to an existing line:
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal.</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go:</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file path for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to wrap the primitive types such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea, but definitely some object wrappers would help ease this API pain.</p>
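<p>If such wrappers existed, a String class might look something like this. This is purely a hypothetical sketch (the class and method names are my own invention, and it ignores the needle-not-found case):</p>
<pre><code>class Str
{
    private $value;

    public function __construct($value)
    {
        $this-&gt;value = (string) $value;
    }

    // Everything after the first occurrence of $needle
    public function afterFirst($needle)
    {
        return new self(substr(strstr($this-&gt;value, $needle), strlen($needle)));
    }

    // Everything after the last occurrence of $needle
    public function afterLast($needle)
    {
        return new self(substr(strrchr($this-&gt;value, $needle), strlen($needle)));
    }

    public function __toString()
    {
        return $this-&gt;value;
    }
}

$url = new Str(&#39;http://www.google.com/a/b/c/d.img&#39;);
echo $url-&gt;afterLast(&#39;/&#39;); // prints d.img
</code></pre>
<p>Intent-revealing names like afterFirst and afterLast would sidestep the strstr/strrchr confusion entirely.</p>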
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, then the version in local is used.</p>
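<p>A simplified sketch of how that resolution works (this is an illustration of the idea only, not Magento&#39;s actual autoloader code):</p>
<pre><code>function resolveClassFile($className)
{
    // Mage_Core_Model_Foo =&gt; Mage/Core/Model/Foo.php
    $relativePath = str_replace(&#39;_&#39;, DIRECTORY_SEPARATOR, $className) . &#39;.php&#39;;

    // First match wins, so a copy in local shadows community and core
    foreach (array(&#39;local&#39;, &#39;community&#39;, &#39;core&#39;) as $codePool) {
        $candidate = &#39;app/code/&#39; . $codePool . &#39;/&#39; . $relativePath;
        if (file_exists($candidate)) {
            return $candidate;
        }
    }

    return false;
}
</code></pre>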
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did though, in November, resolve to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat, but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. And although the development of Smalltalk formalised most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, though not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and, despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time, and what spare time there is comes late at night, when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It really is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer, and after a few 30 minute sprints I pick up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Opens $EDITOR to compose a long command; in my setup it fires up vim. This is great for some of the long Rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
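<p>As a quick self-contained illustration (the data here is made up; any delimited input works the same way):</p>

```shell
# Align ':'-delimited records into readable, padded columns.
printf 'login:shell\nroot:/bin/sh\nwww:/usr/bin/false\n' | column -s: -t
```

<p>Each field lands in its own padded column, just as with the /etc/passwd example above.</p>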
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer app.</p>
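<p>For reference, &#39;d&#39; and the bare-number commands are not zsh builtins; they come from shell configuration (frameworks like oh-my-zsh ship equivalents). A rough .zshrc sketch, an assumption rather than my exact setup, that should give the same behaviour in a plain zsh:</p>

```shell
# ~/.zshrc sketch -- assumed setup, not zsh defaults
setopt AUTO_PUSHD           # every cd pushes the old directory onto the stack
setopt PUSHD_IGNORE_DUPS    # keep the stack free of duplicates
alias d='dirs -v'           # list the stack with numeric indices
for index in {1..9}; do     # let a bare number jump to that stack entry
  alias "$index"="cd +${index}"
done
unset index
```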
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time this has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
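<p>The mechanics, for the curious: :w !cmd pipes the buffer to cmd&#39;s standard input, vim expands % to the current file name, and tee writes its stdin to that file. Because only the tee runs under sudo, the write happens as root even though vim doesn&#39;t. The same pattern from a plain shell (sudo omitted here so it runs anywhere):</p>

```shell
# tee copies stdin to the named file (and echoes it to stdout);
# in the vim mapping it is the tee, run via sudo, that performs the privileged write
printf 'new contents\n' | tee /tmp/tee-demo.conf >/dev/null
cat /tmp/tee-demo.conf
```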
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a>.</p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short: assuming you have macports in /opt/local (the default) and are using the mysql55 port, you do this.</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
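<p>A possible refinement, if the project uses Bundler: the same flags can be stored once via bundle config (a per-gem build setting Bundler supports), so every later bundle install picks them up. Paths as above, assuming the default MacPorts prefix:</p>

```shell
# Remember build flags for the mysql2 gem so 'bundle install' uses them too
bundle config build.mysql2 \
  --with-mysql-lib=/opt/local/lib/mysql55/mysql \
  --with-mysql-include=/opt/local/include/mysql55/mysql
```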
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, if we want to debug during a PHPUnit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections made to port 9000 on its localhost back to the ssh client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable version of package y, while package z requires the beta version of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best (and dubious quality at worst), and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. They are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a>, <a href="doctrine-project.org">Doctrine</a>, are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months, <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
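<p>Process substitution is worth keeping in mind beyond this one trick: any command that insists on file arguments can be handed live command output. A toy example (invented data) comparing two streams without temporary files:</p>

```shell
# Each <(...) appears to diff as a readable file (e.g. /dev/fd/63),
# so diff reports the differing line as if the two printfs were files on disk
diff <(printf 'a\nb\n') <(printf 'a\nc\n')
```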
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care what specifically differs between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
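<p>A close cousin worth knowing is --name-status, which also prefixes each file with the kind of change (A added, M modified, D deleted). A throwaway-repo sketch (paths and names invented):</p>

```shell
# Build a scratch repo to demonstrate --name-status
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo one > a.txt
git add a.txt
git -c user.email=you@example.com -c user.name=you commit -qm 'add a.txt'
git show --name-status --format= HEAD   # a.txt is listed with status A
```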
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important, as with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms in which he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, that Rails programmers live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
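The whole lifecycle can be reproduced in a throwaway directory, with a local bare repository standing in for the remote (all paths and branch names here are invented for the demo):

```shell
# Build a toy remote, clone it, then simulate a branch being
# deleted "from another host" before pruning the clone's view.
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare remote.git
git clone -q remote.git work 2>/dev/null
cd work
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD:master HEAD:stale-branch
git fetch -q origin                              # track both remote branches
git -C ../remote.git branch -q -D stale-branch   # deleted "remotely"
git branch -r                # origin/stale-branch still listed locally
git remote prune origin      # drop tracking refs gone from the remote
git branch -r                # stale tracking branch is gone
```

git fetch --prune (or setting fetch.prune to true in your git config) gives you the same cleanup as part of a normal fetch.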
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer&#39;s type to the string &#39;disabled&#39;, the existing observer entry is overridden with one that will never be fired.</p>
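For illustration, a local.xml along these lines disables the frontend visitor-log observers. The event and observer names below are from memory and may differ by Magento version; check them against app/code/core/Mage/Log/etc/config.xml before relying on this:

```xml
<config>
    <frontend>
        <events>
            <!-- each node mirrors a Mage_Log observer registration,
                 overriding its type with the string 'disabled' -->
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
            <customer_login>
                <observers><log><type>disabled</type></log></observers>
            </customer_login>
            <customer_logout>
                <observers><log><type>disabled</type></log></observers>
            </customer_logout>
        </events>
    </frontend>
</config>
```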
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at the following line: if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you can imagine another engineer coming in to XSS-proof the code: they fixed one spot, but (programmers being human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
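A quick worked example in a scratch directory (file names and dates invented) shows the boundary files in action:

```shell
# Two boundary files plus two test files; only the file whose
# mtime falls between the boundaries should be listed.
cd "$(mktemp -d)"
touch -t 202001010000 start_date_file   # range start: 2020-01-01
touch -t 202006010000 end_date_file     # range end:   2020-06-01
touch -t 202003150000 in_range.txt      # inside the range
touch -t 201901010000 too_old.txt       # before the range
find . -type f -newer start_date_file ! -newer end_date_file -name '*.txt'
# prints ./in_range.txt
```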
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence, much like a plain ; does in regular bash (the backslash stops the shell from interpreting it before find sees it).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to setup a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has an equal or higher wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change your shell no problem.</p>
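Sketched end to end, with the edit demonstrated against a scratch copy of /etc/shells so it can be tried safely (the /opt/local path assumes a Macports install):

```shell
# Append the Macports bash to the list of permitted login shells.
shells=$(mktemp)                            # scratch stand-in for /etc/shells
printf '/bin/bash\n/bin/zsh\n' > "$shells"
echo '/opt/local/bin/bash' >> "$shells"
grep -qx '/opt/local/bin/bash' "$shells" && echo 'shell permitted'
# For the real file:
#   echo '/opt/local/bin/bash' | sudo tee -a /etc/shells
#   chsh -s /opt/local/bin/bash
```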
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
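You can try the transformation stages without a database by substituting printf for the mysql step:

```shell
# printf stands in for the mysql output: one column value per line.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# prints ['red','green','blue'];
```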
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name:[Start TO Finish]
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result.</p>
<p>This approach says: select anything that is not in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is:</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
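The inversion is just De Morgan's law: NOT (NOT in-range AND not-null) describes the same set as (in-range OR null). A quick Python check with toy documents (nothing Solr-specific) confirms the two formulations coincide:

```python
# Three toy documents: in range, out of range, and NULL.
docs = [
    {"id": 1, "date": 5},
    {"id": 2, "date": 50},
    {"id": 3, "date": None},
]

def in_range(doc, lo=0, hi=10):
    return doc["date"] is not None and lo <= doc["date"] <= hi

def exists(doc):
    return doc["date"] is not None

# What we want: in the date range OR null.
wanted = {d["id"] for d in docs if in_range(d) or not exists(d)}
# The Solr trick: negate (out of range AND not null).
negated = {d["id"] for d in docs if not (not in_range(d) and exists(d))}
assert wanted == negated == {1, 3}
```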
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. While this means it also has to be constructed on each loop, it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
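The effect is not PHP-specific. A minimal Python sketch (the Validator class here is invented, standing in for salesrule/validator) shows how a registry-held singleton pins every reference it accumulates, while a fresh instance per iteration lets each one be reclaimed:

```python
class Validator:
    def __init__(self):
        self.seen = []            # grows without bound when shared

    def process(self, item):
        self.seen.append(item)    # the instance keeps a reference

registry = {}

def get_singleton():
    # getSingleton()-style: one shared instance for the whole run
    return registry.setdefault("validator", Validator())

for _ in range(1000):
    get_singleton().process([0] * 100)

# All 1000 processed items are still reachable via the singleton:
assert len(registry["validator"].seen) == 1000

# getModel()-style: a new instance each time, so every item becomes
# unreachable (and collectable) as soon as its iteration ends.
for _ in range(1000):
    Validator().process([0] * 100)
```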
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils... and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot; and by break, in the absolute best case, I mean they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/<distroname>, say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
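<p>A sketch of that copy is below. CHROOT_DIR is a placeholder for your mountpoint (e.g. /mnt/ubuntu); here it falls back to a scratch directory so the snippet is safe to run anywhere:</p>

```shell
# Copy the live environment's resolver config into the chroot so DNS
# name resolution works once we chroot in. CHROOT_DIR is a stand-in
# for your real mountpoint; it defaults to a temporary directory
# purely for illustration.
CHROOT_DIR="${CHROOT_DIR:-$(mktemp -d)}"
mkdir -p "$CHROOT_DIR/etc"
cp /etc/resolv.conf "$CHROOT_DIR/etc/resolv.conf"
```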
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so the group was considered out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows, one for each unique status value. For this code to work as expected, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
<p>The fix is straight forward enough, replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
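<p>The column-order pitfall is easy to reproduce outside of PHP. Here is a small shell illustration (fabricated sample data, not Magento code) of a fetchPairs-style mapping where the first column becomes the array key:</p>

```shell
# Rows as (status, entity_id): keys collapse to the distinct status
# values, so three products yield only two entries.
printf '1 42\n1 43\n2 44\n' \
  | awk '{ p[$1] = $2 } END { n = 0; for (k in p) n++; print n }'   # prints 2

# Rows as (entity_id, value): one entry per product, as intended.
printf '42 1\n43 1\n44 2\n' \
  | awk '{ p[$1] = $2 } END { n = 0; for (k in p) n++; print n }'   # prints 3
```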
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double. A placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the Sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders, techniques for creating test data for use in your test cases, particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being. </p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion around login shells opened after boot, such as when you use the <em>su -</em> command or run an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc only runs if your .bash_profile sources it (which most distributions&#39; default profiles do).</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
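<p>A minimal ~/.bash_profile along those lines might look like this (the PATH entry is just an example of one-time setup, not a recommendation):</p>

```shell
# ~/.bash_profile -- read by login shells only.
# One-time environment setup lives here...
export PATH="$HOME/bin:$PATH"

# ...then delegate everything else to ~/.bashrc so login and
# non-login interactive shells behave the same.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```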
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, it reduces the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
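<p>To see what the sed stage contributes, here it is run on a fabricated sample line (real input comes from dpkg --get-selections): the package state flips from &#39;deinstall&#39; to &#39;purge&#39;, which is what tells the final dpkg -Pa to purge the leftover configuration files.</p>

```shell
# Demonstration only: a fake selections line standing in for real
# `dpkg --get-selections` output. After sed, the state column reads
# "purge" instead of "deinstall".
printf 'somepkg\t\t\tdeinstall\n' | sed 's/deinstall/purge/'
```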
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm can only be used when the VM is powered off; use controlvm instead for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible 32-bit libXss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
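<p>Rather than eyeballing the whole listing, you can filter ldd&#39;s output down to just the unresolved libraries. A small sketch, using /bin/ls as a stand-in for the Skype binary:</p>

```shell
# Show only the unresolved shared libraries, if any. /bin/ls stands in
# for /usr/bin/skype here; substitute whichever binary you are debugging.
ldd /bin/ls | grep 'not found' || echo "all dependencies satisfied"
```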
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes with the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.php.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, in Varnish 3 you can test the syntactic correctness of your VCL files without having to roll the dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU find. For example, to resize a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
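<p>The same resize can also be driven from GNU find. In this sketch convert is only echoed, so you can preview the generated commands before running anything; drop the echo to execute them for real (assumes ImageMagick is installed and uses a throwaway directory):</p>

```shell
# Preview the convert commands find would drive; remove 'echo' to run them.
rm -rf /tmp/imgdemo && mkdir /tmp/imgdemo && cd /tmp/imgdemo
touch a.jpg b.jpg                      # stand-in images
find . -name '*.jpg' ! -name '*-resized.jpg' | while read -r IMAGE; do
    echo convert -resize '1280x720' "$IMAGE" "${IMAGE%.jpg}-resized.jpg"
done > commands.txt
cat commands.txt
```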
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin&#39;. Now, can you see where this is going with respect to our delete? A push with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch called someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
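<p>You can convince yourself of all this safely with a throwaway bare repository standing in for the remote; a sketch with made-up paths and branch names:</p>

```shell
# Simulate origin with a local bare repo, then delete a branch on it by
# pushing "nothing" into it. All paths and names here are throwaway.
set -e
rm -rf /tmp/branchdemo && mkdir /tmp/branchdemo && cd /tmp/branchdemo
git init -q --bare remote.git
git clone -q remote.git work 2>/dev/null
cd work
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'initial commit'
BRANCH=$(git symbolic-ref --short HEAD)   # master or main, per your git
git push -q origin "$BRANCH"
git branch develop
git push -q origin develop
git push -q origin :develop               # push nothing = delete develop
git ls-remote --heads origin > heads.txt
cat heads.txt
```

After the delete, ls-remote lists only the original branch.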
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how sponge waits for end-of-file (EOF) before opening and writing to its output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
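<p>For example, file&#39;s -bi flag reports the MIME type and charset (the exact wording varies by platform; the file below is a throwaway example):</p>

```shell
# Ask file(1) for the MIME type and character set of a file.
printf 'caf\351\n' > /tmp/latin1.txt   # octal 351 = 0xE9, latin-1 e-acute
file -bi /tmp/latin1.txt               # e.g. text/plain; charset=iso-8859-1
```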
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
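<p>If moreutils isn&#39;t available, sponge&#39;s soak-then-write behaviour is easy to approximate with a temporary file. A rough stand-in, not the real sponge:</p>

```shell
# A minimal stand-in for sponge: buffer ALL of stdin first, then
# overwrite the target file. Good enough to demo the in-place pattern.
soak() { tmp=$(mktemp); cat > "$tmp" && mv "$tmp" "$1"; }

printf 'caf\351\n' > /tmp/demo.txt             # octal 351 = cp1252 e-acute
iconv -f cp1252 -t utf-8 /tmp/demo.txt | soak /tmp/demo.txt
wc -c < /tmp/demo.txt                          # 6 bytes: e-acute is now two
```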
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
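<p>You can try the same pattern offline by substituting cat for wget; the mechanics are identical (a sketch using a throwaway directory and tarball):</p>

```shell
# The same trick without the network: cat stands in for wget -q -O -.
set -e
rm -rf /tmp/tardemo && mkdir /tmp/tardemo && cd /tmp/tardemo
mkdir atarfile && echo hello > atarfile/file.txt
tar zcf atarfile.tar.gz atarfile
rm -r atarfile                        # keep only the tarball
tar zxv < <(cat atarfile.tar.gz)      # named pipe feeds tar, as with wget
cat atarfile/file.txt                 # hello
```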
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the Drush disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module-list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
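<p>To check the change took, list the user&#39;s groups with id (shown here for the current user; a newly added group only appears in sessions started after the change):</p>

```shell
# List supplementary groups and check for a given group by name.
id -Gn                                   # e.g. aaron wheel adm
id -Gn | grep -qw "$(id -gn)" && echo "primary group listed"
```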
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command-line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
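<p>The whole flow can be reproduced with a throwaway bare repository standing in for origin (all names here are made up):</p>

```shell
# Seed a bare "origin" with a develop branch, clone it fresh, then check
# out a local develop branch tracking origin/develop.
set -e
rm -rf /tmp/trackdemo && mkdir /tmp/trackdemo && cd /tmp/trackdemo
git init -q --bare remote.git
git clone -q remote.git seed 2>/dev/null
cd seed
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init
git branch develop
git push -q origin --all
cd .. && git clone -q remote.git work && cd work
git checkout -b develop origin/develop
git rev-parse --abbrev-ref --symbolic-full-name '@{u}' > upstream.txt
cat upstream.txt                       # origin/develop
```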
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
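<p>For instance (the paths here are assumptions, not from any real install), a php.ini ordering that lets a bundled copy win might look like this:</p>

```
; List application-local library dirs before the global PEAR directory,
; so a bundled Zend/ tree shadows the globally installed one
include_path = ".:/var/www/myapp/library:/usr/share/php"
```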
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig:</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
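<p>Alternatively, you can let git write that .gitconfig for you with <em>git config --global</em>. On a real server you would run it as the jenkins user (e.g. <em>sudo -u jenkins -H git config --global ...</em>); the sketch below uses a throwaway HOME so it is safe to try anywhere:</p>

```shell
# Write user.name/user.email into $HOME/.gitconfig via git itself.
set -e
export HOME=$(mktemp -d)          # throwaway HOME for the demo
git config --global user.name Jenkins
git config --global user.email jenkins@localhost
git config --global user.name     # prints: Jenkins
```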
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed and running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git. </p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts describing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a first build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of the project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately when something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload the server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins package starts a Java web container listening on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
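<p>The port edit can also be scripted. A sketch, demonstrated against a stand-in copy of the file so it is safe to run anywhere (on a real host you would target /etc/default/jenkins with sudo):</p>

```shell
# Rewrite the HTTP_PORT line in a /etc/default/jenkins-style file.
set -e
cfg=$(mktemp)
printf 'HTTP_PORT=8080\n' > "$cfg"            # stand-in for /etc/default/jenkins
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=9090/' "$cfg"
grep '^HTTP_PORT=' "$cfg"                     # prints: HTTP_PORT=9090
```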
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer (the <em>-P</em> flag is shorthand for <em>--partial --progress</em>).</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent invocations of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
</ol>
<p>4 Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
<p>Personally I find option number 4 the best with the least amount of work.</p>
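<p>For completeness, option 1 amounts to adding a stanza like this to your repository&#39;s .git/config by hand (remote and branch names taken from the example above):</p>
<pre><code>[branch &quot;master&quot;]
    remote = origin
    merge = refs/heads/master
</code></pre>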
<p>To avoid having to do this for every new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software on which it has historically been notoriously difficult to employ TDD practices. Luckily, in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a> testing frameworks released to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a LIKE pattern, e.g. something like:</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching the pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
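<p>A minimal sketch of that combined approach (database name, user and pattern are placeholders; the final command is printed rather than executed so you can review it first):</p>

```shell
# Placeholders -- adjust for your environment.
DB=mydb
DBUSER=user
PATTERN='mytables_%'

# In the real script the table list comes from mysql itself:
#   TABLES=$(mysql -u"$DBUSER" -p -N -B -e "SHOW TABLES FROM $DB LIKE '$PATTERN'")
# Here we use a fixed list so the sketch stands alone.
TABLES="mytables_orders mytables_customers"

# Compose the dump command; drop the echo (or pipe it to sh) to run it.
DUMP_CMD="mysqldump -u$DBUSER -p $DB $TABLES"
echo "$DUMP_CMD"
```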
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. Until those updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install the plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using the Strings from the keys array instead, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running, and they&#39;re a significant improvement on the current Helios SR2 release.</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax. To me, they detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or, at a minimum, creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
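<p>To get a feel for the trade-off, compare raw and compressed sizes on any repetitive text; the snippet below uses generated text as a rough stand-in for a dump file (gzip is assumed to be installed):</p>

```shell
# Generate a chunk of repetitive text, similar in character to a SQL dump.
SAMPLE=$(yes 'INSERT INTO mytable VALUES (1, 2, 3);' | head -n 4000)

# Measure the raw and gzip-compressed sizes in bytes.
RAW_BYTES=$(printf '%s' "$SAMPLE" | wc -c | tr -d ' ')
GZ_BYTES=$(printf '%s' "$SAMPLE" | gzip -c | wc -c | tr -d ' ')

echo "raw: $RAW_BYTES bytes, gzipped: $GZ_BYTES bytes"
```

<p>Run the same comparison with bzip2 -c on a real dump to decide whether the smaller file justifies the extra CPU time.</p>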
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: the compression slows the dump down, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may not be worth the few megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump, which means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place and will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of those tables can be lost as writes occur to them during the backup process; that risk has to be weighed against the cost of blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
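<p>Putting the options above together, a full backup invocation looks something like this (user and database name are placeholders; the command is composed and printed rather than run, since it needs a live server):</p>

```shell
# Compose a dump command using the options discussed above.
# DB and DBUSER are placeholders -- adjust for your environment.
DB=mydatabase
DBUSER=user
DUMP_CMD="mysqldump --single-transaction --skip-lock-tables \
--disable-keys --no-autocommit -u$DBUSER -p $DB"

echo "$DUMP_CMD"
```

<p>When you run it for real, pipe the output through gzip as shown earlier to keep the file size down.</p>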
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute command <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching all the way to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply change the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to subversion’s ‘svn del’.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar subversion model of svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>At first I was labouring like this:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&amp;#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run:</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the cache directory PEAR expects:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small: some basic webpage views, maybe a few forms and, likely, some sort of search functionality. At this stage, if things need to change, you can normally change them in place, directly on the web server, without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web server stops being a &#39;quick win&#39; and starts to feel more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But getting a test environment set up and representative of the live system is often a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; (that is, to not bother running any tests and just cross your fingers) is too great.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially, Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised, so to get up and running I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to the host:port of your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will only sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed to the program. </p>&#13;
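<p>If what you actually want in Ruby is the program's name, it lives in the global variable $0 (aliased as $PROGRAM_NAME) rather than in ARGV. A quick sketch (the filename here is just an example):</p>

```ruby
#!/usr/bin/env ruby
# argv_demo.rb - run as: ruby argv_demo.rb helloworld
puts $0        # the script's name (like argv[0] in C, PHP and Bash)
puts ARGV[0]   # the first real argument; nil if none was given
```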
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation, though I'm used to arguments starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL RubyGem as directed by Rake, it links against the bundled OS X MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a>, you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation: there is no in-situ update feature, so you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
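<p>Putting the pieces together, an update via termDocs() is just a delete followed by an add. A rough sketch (field names and values here are illustrative, and error handling is omitted):</p>

```php
// Replace the document whose Keyword field 'path' matches a given value.
$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');
foreach ($index->termDocs($term) as $id) {
    $index->delete($id);   // remove the stale copy
}

$doc = new Zend_Search_Lucene_Document();
$doc->addField(Zend_Search_Lucene_Field::Keyword('path', '/somepath/somewhere'));
$doc->addField(Zend_Search_Lucene_Field::Text('title', $newTitle));
$doc->addField(Zend_Search_Lucene_Field::Text('contents', $newContents));
$index->addDocument($doc);
$index->commit();          // make the delete and the add visible to searches
```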
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one reason it is unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter = null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
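<p>One further detail the docs gloss over: these filter values are bit flags, so they can be combined with a bitwise OR. Note the result is a union, that is, getMethods() returns every method matching <em>any</em> of the flags. A small sketch:</p>

```php
class Demo
{
    public function pub() {}
    protected function prot() {}
    public static function stat() {}
}

$r = new ReflectionClass('Demo');
// Matches prot() (protected) and stat() (static), but not plain public pub()
foreach ($r->getMethods(ReflectionMethod::IS_PROTECTED | ReflectionMethod::IS_STATIC) as $m) {
    echo $m->getName(), "\n";
}
```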
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, it seems that in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf</a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This utility can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have any custom language packs, convert these to UTF-8 as well.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is twofold: a) to get yourself past a recruiter and on to the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me: constants.</p>&#13;
<p>The word 'constant' is pretty clear in its meaning: a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables whose names are capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> want to change its value, Ruby won’t stand in your way (it prints a warning and carries on), or even make it that hard to do.</p>&#13;
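<p>To see just how un-constant a Ruby constant is, try reassigning one (a minimal sketch):</p>

```ruby
# Reassigning a constant only triggers a warning on stderr,
# along the lines of: "warning: already initialized constant ANSWER".
ANSWER = 41
ANSWER = 42
puts ANSWER   # => 42
```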
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[For some people, using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as SQL. Very useful when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (e.g. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine producing HTML, with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man-hours, but in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
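<p>To illustrate that similarity, here is a minimal, self-contained sketch (my own illustration, not from the original post; names and the uppercasing "protocol" are invented for the example). It uses PHP's stream functions, whose listen/connect/accept/read/write sequence mirrors the classic C sockets idiom almost one-to-one:</p>

```php
<?php
// Minimal TCP client/server sketch using PHP's stream wrappers.
// The call sequence (listen, connect, accept, write, read, close)
// closely mirrors the C sockets idiom.

// Throwaway local server; port 0 lets the OS pick a free port.
$server = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr);
$addr   = stream_socket_get_name($server, false); // e.g. "127.0.0.1:54321"

// The "client": connect and send a line, much like connect()/write() in C.
$client = stream_socket_client("tcp://{$addr}", $errno, $errstr, 5);
fwrite($client, "hello\n");

// Server side: accept the connection and echo the line back, uppercased.
$conn = stream_socket_accept($server);
$line = fgets($conn);             // cf. read() into a char buffer in C
fwrite($conn, strtoupper($line));

$response = fgets($client);
echo $response;

fclose($client);
fclose($conn);
fclose($server);
```

<p>Strip the <code>$</code> sigils and swap the stream calls for <code>socket()</code>/<code>connect()</code>/<code>read()</code>/<code>write()</code> and this reads almost like the equivalent C program.</p>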
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is unusual among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL character ('\0').</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters (19 usable, plus the terminating NUL), and str itself points to an address in memory where 20 bytes have been reserved for it. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be a purely templating language for C web applications, which is where PHP modules / extensions come in: these were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
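<p>A short self-contained sketch of the pattern (my own illustration, not from the original post; the CSV data is invented), using fputcsv/fgetcsv, which need a stream handle rather than a string:</p>

```php
<?php
// Write CSV rows held in a variable to an in-memory stream, then read
// them back -- no temporary file ever touches the disk.
$rows = [
    ['sku', 'price'],
    ['ABC-1', '9.99'],
];

$fh = fopen('php://memory', 'wb+');
foreach ($rows as $row) {
    fputcsv($fh, $row);            // fputcsv requires a stream handle
}

rewind($fh);                        // seek back to the start before reading
$first = fgetcsv($fh);              // first row back as an array

rewind($fh);
$csv = stream_get_contents($fh);    // the entire buffer as one string
fclose($fh);

echo $csv;
```

<p>The same handle works with fread, fwrite, fseek and friends, exactly as it would for a regular file.</p>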
<p>You can fread, fwrite, or stream_get_contents on the memory stream, or push it out over the network using the TCP streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL:</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded POST data), and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue or other batch-processing stack. It would be worth adding some sort of retry threshold, though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>But anyway, using svn:externals is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
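<p>The translation can be sketched like this (a simplified, illustrative model of the autoloader's behaviour; the function name is mine, not Magento's actual API): each underscore-separated segment of the class name becomes an ucfirst'd directory component.</p>

```php
<?php
// Illustrative sketch of how a Magento-style class loader maps a class
// name fragment to a file path: underscores become directory separators
// and each segment gets its first letter uppercased. NOT Magento's code.
function classNameToPath(string $class): string
{
    $segments = array_map('ucfirst', explode('_', $class));
    return implode('/', $segments) . '.php';
}

// 'a_long_name_for_a_model' resolves to A/Long/Name/For/A/Model.php,
// while the run-together form resolves to a single-file path:
echo classNameToPath('A_long_name_for_a_model') . "\n";
echo classNameToPath('Alongnameforamodel') . "\n";
```

<p>This is why the underscored alias and the camelcased class file never line up on a case-sensitive file system.</p>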
<p>On Windows this is fine; on a case-sensitive file system (e.g. case-sensitive HFS on a Mac, or most Unix file systems), it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas when upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find the attribute code, look it up either in the database (the eav_attribute table) or in the admin backend under Catalog-&gt;Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second, while a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type DATE or DATETIME, although FROM_UNIXTIME will let you work with Unix timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. Any row whose date_column is earlier than this cutoff is therefore more than 30 days old, and that is exactly what the comparison above matches.</p>&#13;
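<p>As a quick sanity check, the same interval arithmetic can be reproduced from the shell with GNU date (a sketch assuming coreutils is installed; MySQL itself is not involved):</p>

```shell
# Pin "today" to 2010-05-20 to mirror the example above; in practice
# you would just use: date -d '-30 days' +%F
cutoff=$(date -d '2010-05-20 -30 days' +%F)
echo "$cutoff"   # prints 2010-04-20
```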
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or in other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included (getChildHtml returns a string, so don't forget the echo). Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore within the &lt;customer_account&gt; element we add the code:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
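<p>As a minimal sketch (de_DE.UTF-8 here is just an example; pick any UTF-8 locale that 'locale -a' lists on your machine):</p>

```shell
# Export a UTF-8 locale for the current session...
export LC_ALL='de_DE.UTF-8'
# ...and persist it for future shells; ~/.profile is one common choice,
# ~/.bash_profile or /etc/profile work too.
echo "export LC_ALL='de_DE.UTF-8'" >> "$HOME/.profile"
echo "$LC_ALL"   # prints de_DE.UTF-8
```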
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII covers the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. Extended characters, however, like accents, symbols and umlauts, differ: a cp-1252 trademark symbol has a different byte sequence in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, because those bytes do not form a valid utf-8 sequence.</p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
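<p>The same conversion is available from the command line via the iconv(1) utility, which is handy for normalising whole files before they ever reach PHP (a sketch; the file names are just for illustration):</p>

```shell
# "café" in ISO-8859-1: the é is the single byte 0xE9 (octal 351).
printf 'caf\351\n' > /tmp/latin1.txt
# Convert to UTF-8, where é becomes the two-byte sequence 0xC3 0xA9.
iconv -f ISO-8859-1 -t UTF-8 /tmp/latin1.txt > /tmp/utf8.txt
wc -c < /tmp/latin1.txt   # 5 bytes: 4 characters plus the newline
wc -c < /tmp/utf8.txt     # 6 bytes: the é now takes two bytes
```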
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output, you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It isn't, at least not with the BSD sed that ships with Mac OSX, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
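<p>Note that the empty '' argument is a BSD sed quirk; GNU sed on Linux would treat it as the script and fail. A sketch of a form that works with both, at the cost of a backup file:</p>

```shell
# Passing the backup suffix attached to -i works on BSD and GNU sed alike.
printf 'hello world\n' > /tmp/helloworld.txt
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt        # prints: goodbye world
rm /tmp/helloworld.txt.bak     # drop the backup once you're happy
```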
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expansion gives the number of elements in FILES, so $(seq 0 $((${#FILES[@]} - 1))) generates the indices 0 through N-1. The seq command produces a sequence of numbers from x to y: call seq 0 4 and you will get 0, 1, 2, 3, 4 (one per line).</p>&#13;
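<p>For the common case where you only need the values rather than the indices, bash can iterate over the array directly, which also survives paths containing spaces (a sketch with made-up paths):</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

# Quoting "${FILES[@]}" expands each element as its own word,
# so no index arithmetic is needed.
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```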
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment accounts, shipping account details, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
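<p>For the unencrypted values, a minimal bash sketch might look like this (the staging URL and the magento database name here are assumptions; adjust them for your setup):</p>&#13;
<pre>STAGING_URL='http://staging.example.com/'&#13;
echo "UPDATE core_config_data SET value = '${STAGING_URL}' WHERE path = 'web/unsecure/base_url';" &gt; staging.sql&#13;
# then apply it: mysql magento &lt; staging.sql&#13;
</pre>&#13;
<p>web/unsecure/base_url is the standard Magento config path for the store&#39;s base URL. Encrypted values, like payment credentials, need a different approach.</p>&#13;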
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento from the store root&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
&#13;
// Encrypt the first command line argument using the store's crypt key&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux; you just run</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell…this means variables set in one stage cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
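<p>A quick demo of the problem, plus the simplest workaround (reading from a redirection instead of a pipe), assuming a POSIX-ish shell:</p>
<pre><code>count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo $count    # prints 0: the while loop ran in a subshell

printf 'a\nb\nc\n' &gt; lines.txt
count=0
while read -r line; do count=$((count+1)); done &lt; lines.txt
echo $count    # prints 3: the loop ran in the current shell
</code></pre>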
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a “fast-forward.”
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
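<p>To see the whole rebase-then-fast-forward flow end to end, here&#39;s a sketch using a throwaway repo (branch and file names are made up for illustration):</p>
<pre><code>repo=$(mktemp -d) &amp;&amp; cd "$repo" &amp;&amp; git init -q
git config user.email dev@example.com &amp;&amp; git config user.name dev
default=$(git symbolic-ref --short HEAD)   # master or main, depending on your git
echo base &gt; base.txt &amp;&amp; git add base.txt &amp;&amp; git commit -q -m base
git checkout -q -b feature
echo f &gt; f.txt &amp;&amp; git add f.txt &amp;&amp; git commit -q -m feature-work
git checkout -q "$default"
echo m &gt; m.txt &amp;&amp; git add m.txt &amp;&amp; git commit -q -m mainline-work
git checkout -q feature &amp;&amp; git rebase -q "$default"   # replay feature on top
git checkout -q "$default"
git -c merge.ff=only merge -q feature   # fast-forwards cleanly, no merge commit
git log --oneline --merges              # prints nothing: history stays linear
</code></pre>
<p>Skip the rebase step and the merge.ff=only merge refuses to run, which is exactly the gentle reminder you want.</p>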
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. You find peppered throughout the introduction to PHPUnit subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I perhaps would also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: the Statist TDD and Mockist/London School TDD styles. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and instead focuses on the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
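<p>NULLIF behaves the same way in SQLite, which makes for a quick sanity check from the shell (table name and prices here are made up):</p>
<pre><code>sqlite3 :memory: "CREATE TABLE products (group_id INT, price REAL);
INSERT INTO products VALUES (1, 0), (1, 9.99), (1, 4.5), (2, 3.25);
SELECT group_id, MIN(NULLIF(price, 0)) FROM products GROUP BY group_id;"
# prints:
# 1|4.5
# 2|3.25
</code></pre>
<p>The zero-priced row in group 1 becomes NULL and simply drops out of the MIN aggregate.</p>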
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then), you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55.</p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our Magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        fastcgi_read_timeout 120;
        fastcgi_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert (or append to an existing line):
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing to do is set up a self-signed SSL certificate / key pair and store them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
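Before pointing nginx at the pair, it is worth a quick sanity check that the key and certificate actually belong together. A minimal sketch, run from the directory holding the freshly generated files; both commands should print the same modulus hash:

```shell
# The MD5 of the RSA modulus must match between key and certificate
openssl rsa  -noout -modulus -in myserver.key | openssl md5
openssl x509 -noout -modulus -in myserver.crt | openssl md5
```

If the two hashes differ, nginx will refuse to start with an SSL key/cert mismatch error.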
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
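If nginx starts throwing 403s for no obvious reason, a missing execute bit somewhere up the tree is the usual suspect. This loop is a sketch that walks every component of the path and flags any directory the web server can't traverse (it tries GNU stat first and falls back to BSD stat for OS X):

```shell
dir=/Users/aaron/Sites
while [ "$dir" != "/" ]; do
    # GNU stat (-c) first, falling back to BSD stat (-f) on OS X
    perms=$(stat -c '%A' "$dir" 2>/dev/null || stat -f '%Sp' "$dir")
    case $perms in
        *[xt]) ;;                        # world execute (or sticky+x) set
        *)     echo "missing a+x: $dir" ;;
    esac
    dir=$(dirname "$dir")
done
</code>```

Any path it prints needs a `chmod a+x` before nginx can serve files beneath it.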
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed a run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can run &#39;port variants git-core&#39; to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal and start Selenium:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>The fix is to set the default socket to the sock file for the MySQL version you&#39;re using in PHP&#39;s mysql.ini. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
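A tidier long-term option is a Host entry in ~/.ssh/config, which sshfs honours as well. A sketch reusing the host and key from the example above:

```text
# ~/.ssh/config
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
```

With that in place, sshfs awshost:/var/www/ ~/Sites/awshost needs no extra options, and plain ssh awshost picks up the same key too.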
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);  // prints //www.google.com/a/b/c/d.img
</code></pre>
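For comparison, POSIX shell gets the &#39;everything after the last delimiter&#39; behaviour with a single parameter expansion, with no function names to misremember:

```shell
url='http://www.google.com/a/b/c/d.img'
echo "${url##*/}"   # longest prefix matching */ stripped -> d.img
echo "${url#*/}"    # shortest prefix stripped -> /www.google.com/a/b/c/d.img
```

The `##` vs `#` distinction (greedy vs non-greedy) maps directly onto the strrchr vs strstr split above.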
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of these two very similar behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers that shared his vision of interactive computing, and with those, whether in his lab, the other labs, or in management, that, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang.</p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows [1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
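Concretely, that means mirroring the core directory structure under local and copying the class across before editing it. A sketch, run from the Magento root:

```shell
# Mirror the core path in the local code pool, then edit the copy there
mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
   app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
```

The classloader now resolves the local copy first, and the override survives Magento upgrades far better than a patched core file would.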
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Dont Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
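In a deploy script you usually want to block until the old workers have actually drained before doing anything else. A polling sketch; the [n] bracket trick stops the pattern from matching the script&#39;s own command line:

```shell
# Wait (up to 60s) for nginx workers to finish draining after `nginx -s quit`
tries=0
while pgrep -f '[n]ginx: worker' > /dev/null; do
    tries=$((tries + 1))
    if [ "$tries" -ge 60 ]; then
        echo "workers still draining after 60s" >&2
        break
    fi
    sleep 1
done
```

Once the loop falls through, all outstanding connections have been served and it is safe to proceed.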
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did in November though, resolve to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Rerun the previous command with sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
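<p>A quick self-contained example (the printf line is just stand-in data; any colon-delimited input works):</p>

```shell
# column -s':' sets ':' as the input delimiter; -t aligns the
# fields into a neatly padded table, dropping the delimiter.
printf 'user:shell\nroot:/bin/sh\nwww-data:/usr/sbin/nologin\n' | column -s':' -t
```

<p>Swap the -s argument for &#39;,&#39; and you have a quick viewer for simple comma-separated files too.</p>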
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s little things like these that, when you work in a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number then switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
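<p>The reason this works: :w !cmd pipes the buffer to cmd&#39;s standard input, % expands to the current file name, and tee copies its standard input both to the named file and to standard output, with sudo supplying the privileges vim itself lacks. The tee half of the trick is easy to see outside vim (the file path here is just illustrative):</p>

```shell
# tee writes its stdin to the named file AND echoes it to stdout;
# inside vim, sudo elevates the tee process so the write succeeds
# even though vim opened the file without write permission.
echo 'keepalive_timeout 65;' | tee /tmp/demo.conf
cat /tmp/demo.conf
```

<p>Back in vim, expect a warning that the file has changed on disk once the write completes; reload it and carry on.</p>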
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, only to be met with permission denied when you go to write it...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have MacPorts in /opt/local, the default, and are using the mysql55 port):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine. However, if we want to debug during a PHPUnit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings early enough for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!) much more straightforward, and much less insecure.</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to port 9000 on its localhost back to port 9000 on the machine you connected from. So when xdebug on the VM goes to connect to localhost:9000, it actually ends up talking to mydevmachine.local:9000, where the IDE is listening.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full-stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
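<p>For anyone who hasn&#39;t tried it yet, the workflow really is minimal: declare what you depend on in a composer.json file and run composer install (the monolog package here is just an example):</p>

```json
{
    "require": {
        "monolog/monolog": "~1.0"
    }
}
```

<p>Composer then resolves the dependency graph, fetches everything into vendor/ and generates an autoloader, so a single require of vendor/autoload.php makes every installed library available.</p>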
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires the stable release of package y, while package z requires the beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint wants a file to work with, so rather than creating temporary files we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to hand it one.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
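<p>Process substitution is worth knowing in general: bash replaces each &lt;(command) with a readable file path (e.g. /dev/fd/63) whose contents are that command&#39;s output, so any tool that insists on filenames can consume pipelines. A generic sketch with diff, which wants two files:</p>

```shell
# Each <(...) expands to a path carrying the command's output,
# so diff can compare two pipelines without temporary files.
diff <(printf 'a\nb\n') <(printf 'a\nc\n')
```

<p>The same pattern works for comm, join, and anything else that refuses to read from a pipe.</p>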
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
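<p>Both forms are easy to try in a throwaway repository (the file names and identity here are of course just for illustration):</p>

```shell
# Disposable repo: one commit touching two files, then list the
# affected files without printing any diff content.
cd "$(mktemp -d)"
git init -q
touch one.txt two.txt
git add one.txt two.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m 'add two files'
git show --name-only --oneline HEAD
```

<p>The output is just the abbreviated commit line followed by one.txt and two.txt; --name-only works the same way with git log.</p>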
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, anyone who can connect to the running mysqld gets full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=PASSWORD(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than on pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still largely respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why not? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<blockquote>
<p>Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.</p>
</blockquote>
<p>This isn&#39;t great documentation admittedly, but basically it means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, one lesson is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh this list, I need to prune my stale branches. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
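<p>The whole lifecycle, including the prune, can be reproduced with two throwaway repositories. (As a one-step alternative, git fetch --prune fetches and prunes stale remote-tracking branches in a single command.)</p>

```shell
# A bare repo plays the remote; the branch is deleted behind the
# clone's back, leaving a stale remote-tracking ref to prune.
cd "$(mktemp -d)"
git init -q --bare remote.git
git clone -q remote.git work
cd work
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m 'root'
git push -q origin HEAD:master HEAD:defunct   # publish two branches
git -C ../remote.git branch -D defunct        # delete one, as if from another host
git branch -r                                 # still lists origin/defunct
git remote prune origin                       # reports: * [pruned] origin/defunct
git branch -r                                 # only origin/master remains
```
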
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
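<p>For the shape of it, here is a minimal sketch of such a local.xml override. The &lt;type&gt;disabled&lt;/type&gt; mechanism is what Magento&#39;s event dispatcher checks; the exact event and observer node names below are from memory, so copy them from Mage_Log&#39;s config.xml (or the gist above) rather than trusting mine:</p>

```xml
<config>
    <frontend>
        <events>
            <!-- repeat this pattern for each Mage_Log event you want
                 silenced (postdispatch, customer_login, and so on) -->
            <controller_action_predispatch>
                <observers>
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```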
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of how bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger-happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to XSS-proof the code, fixing one occurrence but (programmers being human) missing the other, exactly the same line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often, and when using a stack of Rubygems you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bag support in Chef Solo, which lets you provide environment-specific configuration for your provisioning, is not yet in the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
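<p>For reference, the general shape of the build looks something like the following. The tag and file names here are assumptions, not gospel; the gist above has the script I actually use.</p>

```shell
# Sketch only: the tag name and gemspec location are assumptions,
# check the repo before relying on them.
git clone git://github.com/opscode/chef.git
cd chef
git checkout 10.14.0.rc.0        # assumed tag for a 10.14 pre-release
gem build chef.gemspec           # assumes a gemspec in the checkout
sudo gem install chef-10.14.0*.gem
```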
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the Magento admin menu, you have two simple options: remove it using CSS, or drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item, because it no longer meets the defined dependency requirements.</p>
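<p>A minimal sketch of the idea (the menu path and module name here are made up; the gist above has a working example):</p>

```xml
<!-- adminhtml.xml sketch: hide a menu entry by making it depend on a
     module that does not exist; path and module name are illustrative -->
<config>
    <menu>
        <some_menu_item>
            <depends>
                <module>Nonexistent_Module</module>
            </depends>
        </some_menu_item>
    </menu>
</config>
```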
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production, and the resulting magento report/xxxx files swamped everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and ; terminates the command sequence (much like it does in regular bash).</p>
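<p>Putting touch and find together, here&#39;s a self-contained run-through (file names and dates invented):</p>

```shell
# Delete files modified between midnight Aug 1 and midnight Aug 2, 2012.
tmp=$(mktemp -d)
mkdir "$tmp/report"
touch -t 201208010000 "$tmp/start_date_file"
touch -t 201208020000 "$tmp/end_date_file"
touch -t 201208011200 "$tmp/report/in_range"   # inside the window: deleted
touch -t 201208031200 "$tmp/report/too_new"    # after the window: kept
find "$tmp/report" -type f -newer "$tmp/start_date_file" \
    ! -newer "$tmp/end_date_file" -exec rm -f {} \;
ls "$tmp/report"
```

<p>Note that -newer is a strict comparison on mtime, so a file whose timestamp exactly matches the start boundary is not matched.</p>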
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy-to-install gem (which comes with horribly out-of-date basebox templates) or installing the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning. First, if you have an old MagSafe (1) power adapter, as I did from my old 13&quot; MacBook Pro, and it has an equal or higher wattage rating, you can use it with your MacBook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt MacBook Pro MagSafe can power a 45 watt MacBook Air, but a 60 watt MagSafe can&#39;t power an 85 watt 15&quot; MacBook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh, if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
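<p>Concretely, that boils down to something like this (assuming the default Macports prefix of /opt/local):</p>

```shell
# Whitelist the Macports bash as a valid login shell, then switch to it.
echo '/opt/local/bin/bash' | sudo tee -a /etc/shells
chsh -s /opt/local/bin/bash
```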
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols (note that in sed&#39;s basic regular expressions the capture group parentheses must be escaped). Awk, or any other concatenation approach, would do just fine here too.</p>
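<p>If you want to check the awk/paste/sed plumbing without a database handy, you can stand in for the mysql output with printf (values invented):</p>

```shell
# printf plays the part of mysql --silent, emitting one value per line.
printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/\(.*\)/[\1];/'
# prints: ['red','green','blue'];
```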
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way, e.g.:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are either NULL or lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not NULL. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
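<p>For instance, to match documents published during 2012 or with no publish date at all (field name invented), the query would be:</p>

```text
-(-published_date:[2012-01-01T00:00:00Z TO 2012-12-31T23:59:59Z] AND published_date:[* TO *])
```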
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules, and this caused that product/salesrule index loop to detonate.</p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each iteration. It does mean a new validator has to be constructed on each iteration, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39;: you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot; and by break I mean, in the absolute best case, merely becoming un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition, mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
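<p>That copy is just:</p>

```shell
$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
```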
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host, and if you need to access some specific hardware you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. This is typically used on grouped products when determining whether all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where the entity id comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just two rows, one for each unique status code. In order for this code to work as you would expect, the entity id (product id) needs to be first in the result set, so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. I&#39;ve typically used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles, not Objects</a>, a paper written by Freeman, Pryce, et al. back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into the common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transform in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, tying them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The idea that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders, techniques for creating test data for use in your test cases, particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display in the depth and quality of their references. While GOOS is a pretty domain-specific (Mock Object) text, it serves as a wonderful launching pad for further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly, there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Michael Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy translate easily.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP templating engine modelled on Jinja2, a Python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem, as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
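<p>If you prefer to keep filetype mappings out of vimrc, the same autocommand can live in a ftdetect script, which Vim sources automatically on startup (the path below assumes a standard ~/.vim layout):</p>

```shell
# Install the mapping as a filetype-detection script instead of editing vimrc.
mkdir -p ~/.vim/ftdetect
echo 'au BufRead,BufNewFile *.twig set filetype=htmljinja' > ~/.vim/ftdetect/twig.vim
```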
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app, which starts login shells), .bash_profile gets sourced only on login, specifically only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion around login shells, such as when you use the su - command or run an explicit login shell as sometimes provided by a desktop environment. In these cases the same rule applies: a login shell means .bash_profile is sourced (and .bashrc only if your .bash_profile sources it).</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
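<p>A minimal .bash_profile following that convention might look like this (a sketch, not a complete profile):</p>

```shell
# ~/.bash_profile: delegate everything to ~/.bashrc so that login and
# non-login interactive shells share a single configuration.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```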
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block programmatically in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
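<p>An alternative sketch: dpkg marks packages that were removed but still hold configuration files with the status &#39;rc&#39; in <em>dpkg -l</em> output, so you can also extract those names with awk (demonstrated below against sample output rather than a live system, where you would pipe the names to apt-get purge):</p>

```shell
# Packages removed but still holding config files show status "rc" in dpkg -l.
# Extract their names from sample dpkg -l output; on a real system you could
# pipe the result to: xargs -r sudo apt-get -y purge
sample='ii  bash    5.0-4  amd64  GNU Bourne Again SHell
rc  oldpkg  1.0-1  amd64  removed, configs left behind'
printf '%s\n' "$sample" | awk '$1 == "rc" {print $2}'
# → oldpkg
```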
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> can only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then: we need compatible x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I have neither a compatible libXss nor the various Qt libraries installed.</p>
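<p>Since ldd flags every unresolved dependency with &#39;not found&#39;, pulling out just the missing library names is a one-liner (demonstrated below against saved sample output rather than a live binary):</p>

```shell
# ldd prints "libfoo.so => not found" for unresolved dependencies; extract
# just the missing names. The sample stands in for `ldd /usr/bin/skype` output.
sample='libXss.so.1 => not found
librt.so.1 => /lib32/librt.so.1 (0xf75d0000)
libQtGui.so.4 => not found'
printf '%s\n' "$sample" | awk '/not found/ {print $1}'
# → libXss.so.1
# → libQtGui.so.4
```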
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), takes the collection, iterates over it assigning each address to an array keyed by its entityId, and then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently: load() issues a single query for the given id instead of hydrating the entire address table. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t already doing this. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric, I think, is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
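<p>For example, a quick identify call against a photo might look like this (the file name is hypothetical and the exact fields vary between ImageMagick versions):</p>
<pre><code>$ identify photo.jpg
&gt; photo.jpg JPEG 1920x1080 1920x1080+0+0 8-bit sRGB 346KB 0.000u 0:00.000
</code></pre>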
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining the 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert treats 1280x720 as a bounding box and would actually resize the image to 1152x720, preserving its 16:10 ratio. If we <em>really</em> want it to ignore common sense and squish things down to exactly our requested 1280x720, we need to use the bang operator e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes the syntax easier to remember.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself that is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is that Sponge waits until it sees end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before commencing writing. That matters because the &#39;&gt;&#39; operator truncates its target as soon as the shell sets up the redirection, so you cannot safely read from and redirect to the same file.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
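<p>For example, passing -i makes file report the MIME type and character set (the file name and exact output here are illustrative, and on some platforms the flag is -I):</p>
<pre><code>$ file -i myfile.txt
&gt; myfile.txt: text/plain; charset=iso-8859-1
</code></pre>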
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
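<p>Incidentally, the same result can be had with an ordinary pipe, which some may find easier on the eye:</p>
<pre><code>$ wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxv
</code></pre>
<p>Process substitution really earns its keep when a command wants a file name rather than standard input; for tar, either form works.</p>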
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you need to uncheck everything you can. Save changes. Then go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the Drush disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can pass the module list command&#39;s output as arguments to the disable command using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a>, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
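<p>To double-check that the tracking relationship took hold, git branch -vv lists each local branch together with its upstream (the output below is illustrative):</p>
<pre><code>$ git branch -vv
&gt; * my-new-feature abc1234 [origin/my-new-feature] Initial feature commit
</code></pre>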
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
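<p>To confirm the change took effect (the user may need to log in again for it to apply to running sessions), print the user&#39;s group memberships; here aaron and the output shown are illustrative:</p>
<pre><code>$ id -nG aaron
&gt; aaron wheel
</code></pre>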
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
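<p>A quick way to see what PHP&#39;s include_path currently is (the exact paths shown will vary by system) is to evaluate it from the shell:</p>
<pre><code>$ php -r &#39;echo get_include_path(), PHP_EOL;&#39;
&gt; .:/usr/share/php:/usr/share/pear
</code></pre>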
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed, please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
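<p>Before moving on, it&#39;s worth confirming that the command line tools those PEAR packages provide are actually on your PATH. A quick sketch (the helper function is my own, not part of the tutorial):</p>

```shell
# Check that each QA tool installed via PEAR is resolvable on the PATH.
check_tools() {
  for tool in pdepend phpmd phpcpd phploc phpcs phpunit; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: ok"
    else
      echo "$tool: MISSING"
    fi
  done
}

check_tools
```

If anything reports MISSING, revisit the PEAR install steps in the previous tutorial.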
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
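<p>The sample project ships with its tests/bootstrap.php. If you&#39;re adapting this tutorial to your own Zend Framework app and don&#39;t yet have one, a minimal sketch looks something like this (paths are assumptions based on the standard ZF1 layout; adjust to suit):</p>

```php
<?php
// tests/bootstrap.php - minimal sketch for a standard ZF1 project layout.
error_reporting(E_ALL | E_STRICT);

// Run the application in its 'testing' configuration section.
define('APPLICATION_ENV', 'testing');
define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

// Make the ZF library (and anything else in library/) autoloadable.
set_include_path(implode(PATH_SEPARATOR, array(
    APPLICATION_PATH . '/../library',
    get_include_path(),
)));

require_once 'Zend/Application.php';
```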
<p>Our build environment is now configured correctly to run Zend Framework Test Cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images; to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
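<p>For reference, the usual fix is to give Jenkins&#39; operating system user a git identity. A hedged sketch, run here against a scratch HOME so it&#39;s safe to try (in practice, run the two config commands as the jenkins user on your server):</p>

```shell
# Give git a user identity in an isolated HOME (swap in the real
# jenkins user's home directory on an actual server).
HOME=$(mktemp -d)
export HOME

git config --global user.name "Jenkins"
git config --global user.email "jenkins@example.com"

# Verify the identity took effect; prints "Jenkins".
git config --global --get user.name
```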
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
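<p>If you issue these often, a small wrapper that rejects typos before hitting the server can be handy. This helper is my own sketch, not part of Jenkins:</p>

```shell
# Wrap the three admin endpoints; refuse anything else up front.
jenkins_admin() {
  server=$1
  cmd=$2
  case "$cmd" in
    reload|restart|exit) curl "${server%/}/${cmd}" ;;
    *) echo "unknown admin command: $cmd" >&2; return 1 ;;
  esac
}
```

Usage: `jenkins_admin http://localhost:8080 reload`.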
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepared package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
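<p>The port edit above can also be scripted. A sketch using sed, demonstrated here against a scratch copy of the file so it&#39;s safe to run (on a real box, point it at /etc/default/jenkins with sudo):</p>

```shell
# Rewrite the HTTP_PORT line in a copy of the config file.
conf=$(mktemp)
printf '# port for HTTP connector (default 8080; disable with -1)\nHTTP_PORT=8080\n' > "$conf"

sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' "$conf"

grep '^HTTP_PORT=' "$conf"   # HTTP_PORT=8081
```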
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have burned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
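<p>The gist below has the exact script; the core of it is a query along these lines (my reconstruction from memory, so treat the details as approximate):</p>

```sql
-- Report each table's size in MB for one schema, largest first.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = 'mydb'   -- substitute your database name
ORDER BY size_mb DESC;
```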
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools; you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
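<p>To save typing the flags each time, you can wrap this in a tiny shell function (my own convenience sketch, not an rsync feature):</p>

```shell
# Resume-friendly copy over ssh: --partial keeps partially transferred
# files so a re-run picks up where the last attempt stopped.
rcp_resume() {
  rsync --partial --progress --rsh=ssh "$@"
}
```

Usage: `rcp_resume localfile remoteuser@remotehost.com:/remote/path`; after a dropped connection, re-run the same command and rsync continues the transfer.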
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful but it helped google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved either using the package manager or aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
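<p>The gem handles many formats; if all you care about is PNGs, the dimensions can even be read with nothing but the standard library. A sketch (the helper name is mine) that parses the IHDR chunk directly:</p>

```ruby
require 'stringio'

# A PNG stores its width and height as two big-endian 32-bit integers
# starting at byte offset 16, inside the IHDR chunk.
def png_dimensions(io)
  header = io.read(24)
  raise ArgumentError, 'not a PNG' unless header && header[0, 8] == "\x89PNG\r\n\x1a\n".b
  header[16, 8].unpack('N2') # => [width, height]
end

# Works with File.open('photo.png', 'rb') too; demonstrated here on an
# in-memory IHDR header for a 640x480 image.
fake_png = "\x89PNG\r\n\x1a\n".b + [13].pack('N') + 'IHDR' + [640, 480].pack('N2')
p png_dimensions(StringIO.new(fake_png)) # => [640, 480]
```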
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You now have set up the remote and pushed your master branch into it. From here it gets tricky, because a subsequent plain git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least amount of work.</p>
<p>To avoid having to do this for every new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
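<p>The whole flow can be exercised end-to-end in a throwaway directory, which is a nice way to convince yourself how the pieces fit. A sketch (note that newer versions of git spell the upstream step --set-upstream-to):</p>

```shell
# Simulate a bare "remote" plus a working repository, then wire them up.
tmp=$(mktemp -d)
cd "$tmp"
git init --bare -q remote.git

mkdir work && cd work
git init -q
git symbolic-ref HEAD refs/heads/master   # pin the branch name to master
git config user.name "Example" && git config user.email "ex@example.com"
echo hello > README && git add README && git commit -qm 'initial commit'

git remote add origin ../remote.git
git push -q origin master

# Modern spelling of 'git branch --set-upstream master origin/master':
git branch --set-upstream-to=origin/master master

git pull -q   # no more ugly error: the branch knows its upstream
```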
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
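<p>The shape of the script is roughly this (a from-memory sketch; credentials are assumed to live in ~/.my.cnf, and the gist has the tested version). This variant hands the whole matching list to a single mysqldump call rather than one call per table:</p>

```shell
# Dump only tables whose names match a SQL LIKE pattern.
mysqldump_pattern() {
  db=$1
  pattern=$2
  # -N: skip column headers, -B: batch (plain, tab-separated) output.
  tables=$(mysql -N -B -e "SHOW TABLES LIKE '${pattern}'" "$db")
  if [ -z "$tables" ]; then
    echo "no tables in $db match $pattern" >&2
    return 1
  fi
  # Intentionally unquoted: each table name becomes its own argument.
  mysqldump "$db" $tables
}
```

Usage: `mysqldump_pattern mydb 'mytables_%' > mytables.sql`.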
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until they land there, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install the plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys as $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys as $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas of Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
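<p>Calling to_sym at every access gets tedious. One alternative (a sketch of my own, not from the discussion above) is to normalise the hash to symbol keys once up front:</p>

```ruby
# Build a copy of the hash with every key converted to a Symbol, so all
# subsequent lookups can use symbol keys regardless of how it was built.
myarray = { :mykey => 'hello world', 'another_key' => 'goodbye world' }

symbolised = myarray.inject({}) do |acc, (k, v)|
  acc[k.to_sym] = v
  acc
end

puts symbolised[:mykey]        # hello world
puts symbolised[:another_key]  # goodbye world
```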
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that keeps me on uneven ground is Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (as in most OO languages), two strings are different objects even if they consist of the same sequence of characters. Two symbols with the same sequence of characters, however, are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
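<p>The &#39;only ever one copy&#39; claim is easy to verify yourself (a quick experiment of my own, easily run in irb):</p>

```ruby
# Every occurrence of a symbol literal refers to one shared object, while
# each string literal builds a brand-new object with its own object_id.
puts :name.object_id == :name.object_id    # true  -- same object
puts "name".object_id == "name".object_id  # false -- two distinct objects
puts "name" == "name"                      # true  -- equal by value
```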
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, given my lack of experience with Ruby, I don&#39;t yet feel qualified to answer definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code places an unnecessary burden on the programmer. They seem out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
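<p>One caveat worth noting (my own observation, not part of the original tip): Bash aliases don&#39;t really take arguments; the text you type is appended after the alias expansion, so the &quot;$@&quot; above expands to the shell&#39;s (usually empty) positional parameters, and the alias only works because the filename happens to land after the redirection. A function expresses the intent directly:</p>

```shell
# Function form of the quicklook helper: arguments are passed through
# explicitly instead of relying on alias text expansion.
ql() {
  qlmanage -p "$@" >/dev/null 2>&1
}
```

<p>qlmanage ships with OSX only, so elsewhere the function is illustrative.</p>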
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases it is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place and will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean table integrity can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports, as MySQL will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted, which is suboptimal for a batch import.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
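<p>Putting these options together, a complete backup command might look like the following (a sketch; the user, database and output names are placeholders). The helper prints the pipeline rather than executing it, so you can inspect the flags before running it:</p>

```shell
# Build the full backup pipeline discussed above: a consistent InnoDB
# snapshot, no MyISAM table locks, faster re-import, gzip compression.
build_backup_cmd() {
  printf '%s' "mysqldump --single-transaction --skip-lock-tables --disable-keys --no-autocommit -uuser -p $1 | gzip -c > $1.sql.gz"
}

build_backup_cmd mydatabase
# Execute for real with: eval "$(build_backup_cmd mydatabase)"
```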
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example:</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty Git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will stage and commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for that path rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is the canonical naming for a bare Git repository (i.e. one that has only the meta information and no working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master, which mirrors the familiar Subversion model of svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs, I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run:</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkin&#39;s site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins&#39;s install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages, some familiar constructs will work exactly the same as you expect. Some wont work at all, and in some cases, they'll sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed into the program. </p>&#13;
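<p>Ruby doesn't lose the program name, it just keeps it out of ARGV (a small sketch of my own; the filename is hypothetical):</p>

```ruby
#!/usr/bin/env ruby
# file: argv_demo.rb
# The script name lives in $0 (alias $PROGRAM_NAME); ARGV holds only the
# arguments, so ARGV[0] is nil when the script is run with none.
puts "program:        #{$0}"
puts "first argument: #{ARGV[0].inspect}"
```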
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin/feature you need to go to the Install New Software screen. On a Mac it's found under the Help menu, by selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often want to update documents over the course of the index's lifecycle. This can prove tricky with the current implementation, as there is no in-situ update feature; you must first delete the old document and then add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional 'filter' parameter. The online documentation makes scant mention of what values this $filter parameter can take. Luckily, in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, it seems that in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book or, in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit. 
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This utility can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you use a downloaded language pack, convert its XML file to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized; Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
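<p>A quick sketch of what this looks like in practice; Ruby just prints an 'already initialized constant' warning on stderr and carries on:</p>

```ruby
# Reassigning a Ruby "constant" is allowed; you only get a warning, not an error.
SPEED_LIMIT = 60
SPEED_LIMIT = 70   # warning: already initialized constant SPEED_LIMIT
puts SPEED_LIMIT   # the new value sticks
```

<p>You can make an object harder to mutate with <code>freeze</code>, but the binding itself is still reassignable.</p>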
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time: when trying to get an OAuth token from Salesforce, use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, get rid of all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but equally it is the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str itself points to an address in memory where 20 bytes have been reserved for it. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new, larger array or perform a concatenation into freshly allocated storage. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in: this is where your business logic originally went. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. A way around having to physically create a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>The memory type is very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded POST data), and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on different (namely case-sensitive) environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, not ALongNameForAModel.php.</p>&#13;
<p>On Windows (case-insensitive) this is fine; on a case-sensitive file system, e.g. case-sensitive HFS+ on a Mac or a typical Unix file system, it will not work.</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (relative to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type DATE or DATETIME, although using FROM_UNIXTIME will allow you to work with Unix timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider  three aspects</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (in 'app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block from your enclosing templates, e.g. 3columns.phtml, or from other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove on these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile, or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
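<p>As a quick sketch (assuming bash, and that a de_DE.UTF-8 locale exists on your system; check the output of 'locale -a' first):</p>

```shell
# pick a UTF-8 variant of your preferred locale
export LC_ALL='de_DE.UTF-8'

# confirm the environment picked it up
echo "$LC_ALL"
```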
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII covers the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
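<p>The same conversion is also available from the shell via the iconv command-line tool (a sketch; the file paths here are just examples):</p>

```shell
# write 'cafe' with an accented e in iso-8859-1 (0xE9 is e-acute in latin-1)
printf 'caf\xe9\n' > /tmp/latin1.txt

# convert it to utf-8, the command-line counterpart of the iconv() call above
iconv -f ISO-8859-1 -t UTF-8 /tmp/latin1.txt > /tmp/utf8.txt

cat /tmp/utf8.txt
```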
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output, you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't (at least not with the BSD sed that ships with Mac OSX), and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to pass an empty backup-suffix argument: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
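<p>To see the two sed flavours side by side (a sketch using a throwaway file in /tmp; the final variant avoids -i entirely and works with both GNU and BSD sed):</p>

```shell
# create a scratch file to substitute in
printf 'hello world\n' > /tmp/helloworld.txt

# BSD/Mac OSX sed: -i requires a backup-suffix argument ('' means no backup)
#   sed -i '' 's/hello/goodbye/g' /tmp/helloworld.txt
# GNU sed: the suffix is optional, so a bare -i works
#   sed -i 's/hello/goodbye/g' /tmp/helloworld.txt

# portable alternative: write to a temp file, then move it into place
sed 's/hello/goodbye/g' /tmp/helloworld.txt > /tmp/helloworld.new
mv /tmp/helloworld.new /tmp/helloworld.txt

cat /tmp/helloworld.txt
```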
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB) ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) part expands to the list of array indices: ${#FILES[@]} is the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4, one per line.</p>&#13;
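<p>For comparison, bash can also iterate the array elements directly, which avoids seq and index arithmetic altogether (quoting each expansion so paths containing spaces survive):</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

# loop over the elements themselves rather than their indices
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```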
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento so its models and helpers are available&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
// the first commandline argument is the plaintext value to encrypt&#13;
$data = $_SERVER['argv'][1];&#13;
// use Magento's own encryption model so the output matches what&#13;
// the admin panel would write to core_config_data&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 "$file" "resized_$file"; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date --date 'last month' '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
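<p>For reference, that config file is just a PHP script returning a config object. A minimal sketch, assuming the Symfony\CS class names master was using when I looked (the config API moves around between revisions, so double-check against the project README for yours):</p>
<pre><code>&lt;?php
// .php_cs -- sketch only; class and fixer names may differ between revisions
$finder = Symfony\CS\Finder\DefaultFinder::create()
    -&gt;in(__DIR__);

return Symfony\CS\Config\Config::create()
    -&gt;finder($finder);
</code></pre>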
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux; you just go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group; handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname</p>
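<p>To see what the alias actually expands to, using the zendesk branch from the error message above:</p>
<pre><code>$ git symbolic-ref --short HEAD
zendesk
$ git branch --set-upstream-to=origin/zendesk
</code></pre>
<p>git symbolic-ref --short HEAD prints the current branch name, so the alias always targets whatever branch you happen to be on.</p>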
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
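<p>To make the problem concrete, here&#39;s the classic symptom alongside one of the FAQ&#39;s workarounds (bash process substitution, which keeps the loop in the current shell):</p>
<pre><code>count=0
printf &#39;a\nb\n&#39; | while read -r line; do count=$((count+1)); done
echo "$count"   # prints 0 -- the while loop ran in a subshell

count=0
while read -r line; do count=$((count+1)); done &lt; &lt;(printf &#39;a\nb\n&#39;)
echo "$count"   # prints 2 -- no pipeline, so no subshell
</code></pre>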
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a &#39;fast-forward&#39;.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
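<p>For completeness, the rebase-then-merge flow this setting nudges you towards looks like this (branch names are illustrative):</p>
<pre><code>$ git checkout feature
$ git rebase master     ## replay feature&#39;s commits on top of master
$ git checkout master
$ git merge feature     ## history is now linear, so this fast-forwards
</code></pre>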
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. In my experience that leads, at best, to confusion and, at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: the Statist style and the Mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
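<p>A tiny worked example (shown with sqlite3 purely because it&#39;s handy for scratch queries; NULLIF and MIN behave the same way in MySQL):</p>
<pre><code>$ sqlite3 &lt;&lt;&#39;SQL&#39;
CREATE TABLE products (group_id INT, price DECIMAL(10,2));
INSERT INTO products VALUES (1, 0.00), (1, 9.99), (1, 4.50), (2, 0.00);
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products GROUP BY group_id;
SQL
1|4.5
2|
</code></pre>
<p>One caveat: if every price in a group is zero, MIN is left with nothing but NULLs and returns NULL rather than 0.</p>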
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select which lets us select a version to activate and give us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>$ echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled, to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cacheable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        fastcgi_read_timeout 120;
        fastcgi_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the socket file for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
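That tee line only covers PDO. If you also use the mysqli or legacy mysql extensions, each has its own socket setting. A minimal sketch (my addition, not from the original post) that prints all three directives, using the MacPorts mysql55 socket path from above; pipe the output through sudo tee to the same ini file if you want to apply them all:

```shell
# Emit socket settings for every MySQL extension PHP might use.
# The path assumes MacPorts' mysql55 port; adjust for your version.
SOCK=/opt/local/var/run/mysql55/mysqld.sock
printf '%s\n' \
  "pdo_mysql.default_socket=${SOCK}" \
  "mysql.default_socket=${SOCK}" \
  "mysqli.default_socket=${SOCK}"
```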
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track down the cause.</p>
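If you mount this host often, you can skip the option entirely by putting the key in ~/.ssh/config (a sketch; the host and key path are the examples from above, not real endpoints). Both ssh and sshfs will then pick up the key automatically:

```
Host aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
```

With that in place, a plain `sshfs aaron@aws.instance.com:/var/www/ ~/Sites/awshost` just works.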
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> -This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
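As an aside, the shell manages the same &#39;everything after the last delimiter&#39; trick with parameter expansion - a handy cross-check for the PHP output above:

```shell
# ${url##*/} strips the longest prefix matching '*/', i.e. everything
# up to and including the last slash - strrchr without the leading slash.
url='http://www.google.com/a/b/c/d.img'
echo "${url##*/}"   # prints d.img
echo "/${url##*/}"  # prints /d.img, matching strrchr($url, '/')
```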
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array and Float. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is in PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who the Executive-X he mentioned in <em>The Early History of Smalltalk</em> was (it was Jerry Elkind). However, I came away with a lot more: in particular, a new appreciation for a number of scientists I previously knew very little about - scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and, at times, support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing - and with those, whether in his lab, the other labs, or management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs.</p>
<p>The book contains a number of particularly powerful scenes. Two in particular stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it or, worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly - their feelings, motivations and backgrounds - that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
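To confirm that a graceful shutdown has actually finished, you can watch the process table until the draining workers disappear (a small sketch of my own, not from the nginx docs):

```shell
# Poll until no nginx worker processes remain, then report.
# Assumes `sudo nginx -s quit` has already been issued.
# The [n] trick stops the pattern matching this script's own command line.
while pgrep -f '[n]ginx: worker' > /dev/null; do
    sleep 1
done
echo "all nginx workers have exited"
```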
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x ctrl-e
</code></pre>
<p>Open $EDITOR to compose a long command (this is bash&#39;s default edit-and-execute-command binding; zsh users need to load and bind the edit-command-line widget first). In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
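A quick self-contained demonstration with made-up passwd-style rows (my own sample data, so you can see the alignment without needing a real /etc/passwd):

```shell
# Two sample rows in passwd format, columnated on ':'.
printf 'root:x:0:0:System Administrator\naaron:x:501:20:Aaron Bonner\n' \
  | column -s ':' -t
```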
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, then getting permission denied when you go to write it...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, do this (assuming MacPorts is in /opt/local, the default, and you are using the mysql55 port):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine; however, if we want to debug during a PHPUnit test, normally you would do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
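<p>If you open this tunnel a lot, the forward can live in your ssh client configuration instead of on the command line, so a plain ssh to the box sets it up every time. A sketch for ~/.ssh/config, reusing the hostname from the example above:</p>

```
# ~/.ssh/config — equivalent to: ssh -R 9000:localhost:9000 myvm.local
Host myvm.local
    RemoteForward 9000 localhost:9000
```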
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
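<p>For contrast, this is exactly the problem Composer solves: dependencies are resolved per project into a local vendor directory, and stability can be flagged per package. A hypothetical composer.json sketch (the package names are made up for illustration):</p>

```json
{
    "require": {
        "acme/package-x": "1.0.*",
        "acme/package-y": "2.0.*@beta"
    }
}
```

<p>Another project on the same machine can happily require a stable acme/package-y, because nothing is installed system-wide.</p>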
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) or dubious (at worst) quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a>, <a href="doctrine-project.org">Doctrine</a>, are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all came together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
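<p>Process substitution isn&#39;t specific to xmllint. Bash replaces &lt;(command) with a /dev/fd path that reads from the command&#39;s output, so any tool that insists on a real filename can consume piped data. A quick stand-alone illustration (this is a bash feature, not plain sh):</p>

```shell
# <(...) expands to a /dev/fd/N path backed by the command's output,
# so wc believes it is reading a file, yet no temporary file is created.
lines=$(wc -l < <(printf 'a\nb\nc\n'))
echo $((lines))   # arithmetic expansion trims any padding; prints 3
```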
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively if you don&#39;t care what differs in the specific contents between two branches, and only want to see different files you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
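<p>Here&#39;s a self-contained demonstration in a throwaway repository (file names are invented); the closely related --name-status flag additionally marks each file as added (A), modified (M) or deleted (D):</p>

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo one > a.txt
echo two > b.txt
git add . && git commit -qm 'add a and b'

echo three > a.txt        # modify a.txt
git rm -q b.txt           # delete b.txt
git commit -aqm 'change a, drop b'

git show --name-only --format= HEAD    # just the file names: a.txt, b.txt
git show --name-status --format= HEAD  # with status letters: M a.txt, D b.txt
```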
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and where the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh the list, then, I need to prune my branches. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
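<p>As a footnote, git fetch --prune combines the fetch and the prune into one step. A self-contained demonstration (repository paths and branch names are invented), simulating a branch deleted from another host:</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"

# "My" working clone, with an identity so commits succeed.
git clone -q "$tmp/remote.git" "$tmp/work"
cd "$tmp/work"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init
git push -q -u origin HEAD
git branch doomed
git push -q origin doomed

# Another host deletes the branch on the remote.
git clone -q "$tmp/remote.git" "$tmp/elsewhere"
git -C "$tmp/elsewhere" push -q origin --delete doomed

git branch -r                  # origin/doomed is still listed here (stale)
git fetch -q --prune origin    # fetch and prune stale refs in one go
git branch -r                  # origin/doomed is gone
```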
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace line 11 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one instance but (programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported with Chef Solo by the stock Chef gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
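<p>A minimal sketch of such an adminhtml.xml (the menu node and module name here are hypothetical placeholders; substitute the menu entry you actually want to hide):</p>

```xml
<!-- etc/adminhtml.xml in a hypothetical custom module -->
<config>
    <menu>
        <!-- the node name must match the menu entry being removed -->
        <newsletter>
            <depends>
                <!-- any module that is not installed or enabled will do -->
                <module>Some_NonexistentModule</module>
            </depends>
        </newsletter>
    </menu>
</config>
```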
<p>As always with Magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all regular files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer). The comparison is based on each file&#39;s modification time.</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
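<p>Putting the pieces together (the timestamps and file names below are illustrative):</p>

```shell
# Create two boundary files whose mtimes mark the range (yyyymmddHHMM)
touch -t 202001010000 /tmp/range_start
touch -t 202001310000 /tmp/range_end

# List all regular files modified after range_start but not after range_end
find . -type f -newer /tmp/range_start ! -newer /tmp/range_end -ls
```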
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped &#39;\;&#39; terminates the command sequence (much like &#39;;&#39; does in regular bash).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy-to-install gem (which comes with horribly out-of-date basebox templates) and installing the latest version from source, which uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning. First: if you have an old MagSafe (1) power pack (which I did, from my old 13&quot; MacBook Pro) and it has a higher or equal wattage rating, you can use it with your MacBook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt MacBook Pro MagSafe can power a 45 watt MacBook Air, but a 60 watt MagSafe can&#39;t power an 85 watt 15&quot; MacBook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change with no problem.</p>
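<p>Concretely, that looks something like this (assuming Macports installed bash under its default /opt/local/bin prefix):</p>

```shell
# Add the Macports bash to the list of shells chsh will accept
echo '/opt/local/bin/bash' | sudo tee -a /etc/shells

# Now chsh will happily switch you over
chsh -s /opt/local/bin/bash
```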
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
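<p>You can try the formatting stages without a database by feeding in some stand-in rows (the colour values here are made up):</p>

```shell
# printf stands in for the mysql --silent output: one column value per line
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# Prints: ['red','green','blue'];
```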
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify</p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not support a purely negative clause like this inside an OR; it silently matches nothing.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
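<p>As a concrete example with a hypothetical end_date field, matching documents whose end_date is either unset or falls within the last month would look like:</p>

```
-(-end_date:[NOW-1MONTH TO NOW] AND end_date:[* TO *])
```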
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic; it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules, and this caused the product/salesrule index loop to detonate.</p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. That means it also has to be constructed anew on each iteration, but it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils... and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. By break, in the absolute best case, I mean they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
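<p>That copy is a one-liner (assuming the same /mnt/ubuntu mountpoint as above):</p>

```shell
# Give the chroot a working DNS configuration by copying the host's resolver file
cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
```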
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all of their children&#39;s stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was flagged out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array result set where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects, and the book demonstrates that Mock Objects are an ideal tool for discovering these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks in a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it wont do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap before the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practise what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Michael Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
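<p>As an alternative, if you&#39;d rather keep filetype mappings out of vimrc, the same autocommand can live in an ftdetect file, which Vim loads automatically. A quick shell sketch (the file name twig.vim is just a convention I use):</p>

```shell
# Create a dedicated filetype-detection file instead of editing vimrc
mkdir -p ~/.vim/ftdetect
echo 'au BufRead,BufNewFile *.twig set filetype=htmljinja' > ~/.vim/ftdetect/twig.vim
```

<p>Either way, the htmljinja syntax script from the link above still needs to be installed for the highlighting itself.</p>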
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking, .bash_profile gets sourced only on login - specifically, when you enter your username and password at the console. (Mac OS X&#39;s Terminal.app is a notable exception: it starts each session as a login shell.) The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion around login shells started after boot, such as when you use the <em>su -</em> command or an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc only runs if .bash_profile sources it explicitly.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
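<p>For the latter approach, a minimal .bash_profile might look like this (just a sketch - adjust to taste):</p>

```shell
# ~/.bash_profile - delegate everything to ~/.bashrc so login and
# interactive non-login shells end up with the same configuration
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```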
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
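<p>To see what the sed in the middle of that pipeline is doing, here it is in isolation on a fabricated selections line (the package name is just an example):</p>

```shell
# Rewrite a dpkg selections entry from 'deinstall' to 'purge'
echo 'somepkg deinstall' | sed 's/deinstall/purge/'
# → somepkg purge
```

<p>dpkg --set-selections then reads these rewritten lines back in, and dpkg -Pa purges everything marked accordingly.</p>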
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> can only be used while the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I&#39;m missing both a compatible libXss and several Qt libraries.</p>
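<p>Incidentally, a quick way to pull just the unresolved entries out of ldd&#39;s output. BIN here is a stand-in for whatever executable you&#39;re investigating (in this post, /usr/bin/skype):</p>

```shell
# Print only the shared libraries the dynamic loader cannot find.
# BIN is a placeholder; point it at the binary you are diagnosing.
BIN=/bin/ls
ldd "$BIN" | awk '/not found/ { print $1 }'
```

<p>An empty result means every dependency resolved; anything printed is what you still need to install.</p>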
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in programs based on how they are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t already doing this. The changed code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or Eterm, ImageMagick has been a firm friend ever since.</p>
<p>There are three main tools I find myself falling back on time and time again: <a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window showing the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to resize a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
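<p>The filename rewrite inside that loop is just a sed substitution, so you can check it on its own before unleashing it on real images (the filename below is made up):</p>

```shell
# derive the output filename by swapping the .jpg suffix
src="holiday.jpg"
dst=$(echo "$src" | sed 's/\.jpg$/-resized.jpg/')
echo "$dst"    # holiday-resized.jpg
```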
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 exactly, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? Convert would fit it inside the requested box while preserving its 16:10 ratio, resizing it to 1152x720. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
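<p>If you want to predict what the default aspect-fit will produce, the box-fit arithmetic is easy to reproduce in the shell (this little helper is just an illustration, not part of ImageMagick):</p>

```shell
# compute the fit of a source image inside a bounding box, the way
# `convert -resize WxH` behaves by default (integer arithmetic)
fit() {
  sw=$1; sh=$2; bw=$3; bh=$4
  if [ $((sw * bh)) -le $((sh * bw)) ]; then
    echo "$((sw * bh / sh))x${bh}"    # height is the tighter constraint
  else
    echo "${bw}x$((sh * bw / sw))"    # width is the tighter constraint
  fi
}
fit 1920 1080 1280 720    # 1280x720 -- 16:9 fits the box exactly
fit 1920 1200 1280 720    # 1152x720 -- 16:10 fitted inside the box
```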
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push <em>nothing</em> into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes the syntax easier to remember.</p>
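<p>This is all safe to try out against a throwaway local &#39;remote&#39; (the paths below are scratch directories, so nothing real is touched):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"      # stand-in for a real remote
git init -q "$tmp/work" && cd "$tmp/work"
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m 'initial commit'
git remote add origin "$tmp/remote.git"
git push -q origin HEAD:someremotebranch  # push "something" into the branch
before=$(git ls-remote --heads origin)
git push -q origin :someremotebranch      # push "nothing" into it: deleted
after=$(git ls-remote --heads origin)
echo "before: $before"
echo "after:  $after"
```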
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
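<p>The difference is easy to demonstrate: the cp1252 &#39;smart quote&#39; bytes 0x93 and 0x94 are control codes in iso-8859-1, but convert cleanly to UTF-8 when treated as cp1252 (this assumes your iconv build knows the cp1252 encoding, which glibc&#39;s does):</p>

```shell
# 0x93/0x94 are cp1252 curly quotes; iconv maps them to U+201C/U+201D
out=$(printf '\x93hello\x94' | iconv -f cp1252 -t utf-8)
echo "$out"    # “hello”
```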
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv, and sponge to save us the tedium of converting each file manually to a new copy and then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
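<p>To see why sponge earns its keep, try the naive version with plain redirection in a scratch directory; the shell opens (and truncates) the output file before the reading command ever runs:</p>

```shell
cd "$(mktemp -d)"
printf 'one\ntwo\n' > demo.txt
# naive in-place filter: '>' truncates demo.txt *before* grep reads it
grep one demo.txt > demo.txt || true
size=$(wc -c < demo.txt)
echo "demo.txt is now $size bytes"
```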
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
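<p>You can play with the same shape without touching the network; here a cat of a local scratch tarball stands in for wget (I&#39;m passing &#39;f -&#39; explicitly so tar reads the archive from stdin on any build):</p>

```shell
set -e
cd "$(mktemp -d)"
mkdir atarfile && echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile && rm -r atarfile
# same pattern as the wget one-liner, fed by a local command instead
tar zxf - < <(cat atarfile.tar.gz)
content=$(cat atarfile/file.txt)
echo "$content"    # hello
```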
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
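<p>The mechanics of that one-liner can be seen with plain shell tools; printf stands in for the drush pm-list call here:</p>

```shell
# printf stands in for `drush pm-list --no-core --type=module --pipe`
list_modules() { printf 'ad\nad_channel\nclick_filter\n'; }
# command substitution splits the output on whitespace, so each module
# name becomes a separate argument -- exactly what pm-disable expects
set -- $(list_modules)
count=$#
echo "would disable $count modules: $*"
```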
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
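<p>Newer versions of Git (1.7.0 and later) can combine the push and --set-upstream steps with a single -u flag; a throwaway repository shows it working:</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"      # stand-in for a real remote
git init -q "$tmp/repo" && cd "$tmp/repo"
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m 'initial commit'
git remote add origin "$tmp/origin.git"
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature      # push and set upstream in one step
upstream=$(git rev-parse --abbrev-ref 'my-new-feature@{upstream}')
echo "$upstream"    # origin/my-new-feature
```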
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
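<p>The whole dance is easy to rehearse in a scratch repository (the branch and file names here are invented):</p>

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m 'initial commit'
git checkout -q -b develop
git checkout -q -                 # oops: back on the default branch
echo 'new work' > notes.txt
git add notes.txt
git stash push -q                 # shelve the accidental change
git checkout -q develop
git stash pop -q                  # replay it on the right branch
content=$(cat notes.txt)
echo "$content"    # new work
```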
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
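<p>Equivalently, you can have git write that file for you with git config; HOME is pointed at a scratch directory below purely so the demo doesn&#39;t clobber a real configuration:</p>

```shell
export HOME=$(mktemp -d)                  # scratch HOME for the demo
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
name=$(git config --global user.name)
echo "$name"    # Jenkins
```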
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build that produces a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new project, we copy this template and give the project a name. Choose whatever you like; for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. To have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a first build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. See my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries; it&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help with this, but perhaps the first one to check out is the email functionality, which lets your development team know immediately when something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued as simple http requests. For example, to reload a Jenkins instance&#39;s configuration you just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload the server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins package attempts to start a java web container on port 8080. To change this, open /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
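<p>As a sketch, the port change can also be made non-interactively with sed. The example below edits a local copy of the file so it&#39;s safe to try anywhere; on a real install you would point it (with sudo) at /etc/default/jenkins, and 8181 is just an arbitrary example port:</p>

```shell
# Work on a local copy; fall back to a stub if the real file is absent.
cp /etc/default/jenkins ./jenkins.defaults 2>/dev/null \
    || printf 'HTTP_PORT=8080\n' > ./jenkins.defaults

# Rewrite the HTTP_PORT line in place (GNU sed syntax).
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8181/' ./jenkins.defaults

# Confirm the change took effect.
grep '^HTTP_PORT=' ./jenkins.defaults
```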
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
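<p>The gist below contains the full query. As a sketch of the approach (not the gist verbatim; the root user and wrapper function are illustrative), it boils down to summing each table&#39;s data and index length from INFORMATION_SCHEMA and sorting, largest first:</p>

```shell
# Report per-table size in MB for one schema, biggest tables first.
# Sketch only: user and output format are illustrative.
report_table_sizes() {
    db="$1"
    mysql -u root -p -N -e "
        SELECT table_name,
               ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
        FROM information_schema.TABLES
        WHERE table_schema = '$db'
        ORDER BY (data_length + index_length) DESC;"
}

# usage: report_table_sizes mydb
```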
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace, though: a hs_* dump file in your home directory. In that file (along with a lot of other verbiage) is a message like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful but it helped google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved either through the package manager or with aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup: the PHP Developer Tools, the EGit/JGit plugins, and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because a subsequent git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p>$ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
</ol>
<p>4. Refer to the remote branch using --set-upstream:</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
<p>Personally I find option 4 the best, as it involves the least work.</p>
<p>To avoid having to do this in future, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
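<p>Pulling the steps above together, here is an end-to-end sketch using a local bare repository in place of github (paths and the user identity are illustrative; note that modern versions of git spell the flag --set-upstream-to rather than --set-upstream):</p>

```shell
# Create a throwaway repo with one commit on master.
git init demo && cd demo
git symbolic-ref HEAD refs/heads/master
git config user.email dev@example.com && git config user.name Dev
git commit --allow-empty -m 'initial commit'

# Stand in a bare repository for the remote, wire it up, and push.
git init --bare ../demo-remote.git
git remote add origin ../demo-remote.git
git push origin master
git fetch origin

# Option 4: tell master which remote branch it tracks...
git branch --set-upstream-to=origin/master master

# ...so a bare `git pull` now knows what to merge.
git pull
```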
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, lightweight approach. EcomDev, meanwhile, appears on the surface to provide far greater support for testing, at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of the tables you wanted to export. Another reply had a better way: call mysql to get a list of tables matching a glob pattern, put them in an array, then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
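<p>The gist below is the generic version. A minimal sketch of the idea (collapsed here into a single mysqldump call, since mysqldump accepts a list of table names; the function name and flags are illustrative):</p>

```shell
# Dump only the tables in a database whose names match a SQL LIKE pattern.
# Prompts for the password twice (once for mysql, once for mysqldump).
mysqldump_bypattern() {
    user="$1"; db="$2"; pattern="$3"
    tables=$(mysql -u "$user" -p -N -B -e "SHOW TABLES LIKE '$pattern'" "$db")
    # Word-splitting of $tables is intentional: one argument per table name.
    mysqldump -u "$user" -p "$db" $tables
}

# e.g. dump every table whose name starts with mytables_:
# mysqldump_bypattern root mydb 'mytables_%'
```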
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. Until those updates are available, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully, the language stewards saw fit to include String#to_sym as a convenience method for creating a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take sometime to add, let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them more respect, as much of the learning-Ruby literature either says to basically ignore them for now or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest, I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
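<p>As a concrete illustration of the &#39;colon in front of a string&#39; idea, here is a minimal, hypothetical sketch of Symbols used as hash keys (the names are made up for the example):</p>

```ruby
# A hypothetical record keyed by symbols rather than strings.
person = { :name => "Aaron", :language => "Ruby" }

puts person[:name]       # prints "Aaron"
puts person.keys.inspect # prints "[:name, :language]"
```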
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages), two strings consisting of the same sequence of characters are different objects. Two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
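<p>You can see this one-copy behaviour for yourself with object identity. This is a small sketch added for illustration; <code>.dup</code> is used so the two strings are guaranteed to be separate objects even if your Ruby deduplicates string literals:</p>

```ruby
# Strings with equal contents are (normally) distinct objects;
# symbols with equal contents are always the very same object.
a = "name".dup
b = "name".dup

puts a == b               # true  -- same characters
puts a.equal?(b)          # false -- two separate objects in memory

puts :name.equal?(:name)  # true  -- one shared, immutable object
```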
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime: you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. In a high-level language, the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant, natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say that as I become more familiar and accustomed to the presence and use of Symbols, I&#39;ll learn to accept them. But as a developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites, you’ll regularly need to export and import copies of MySQL databases, whether for testing and debugging or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial:</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect, and for small databases it is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character for a single-byte character set or up to four bytes for UTF-8. Bzip2 will bring the file size down considerably, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: the slower inline compression extends the time of the dump, during which (by default, in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of compression format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 dump may outweigh the few extra megabytes saved over the faster gzip. Whatever your choice, importing a compressed dump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump; other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means mysqldump starts a transaction before dumping the contents of a table, ensuring a consistent view of the data without blocking other applications: writes can occur while the backup is taking place without affecting it. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the integrity of a MyISAM table can be lost if writes occur to it during the backup; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of compression format also has a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves import performance, as MySQL will only build the indexes once the import finishes. With keys enabled, the indexes are updated after each row is inserted; for a batch import this is suboptimal.</p>&#13;
<p>By default, each statement on an InnoDB table is autocommitted. This carries unnecessary overhead when performing a batch import, as you really only need to commit once each table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>Say you have a string, for example:</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that has only the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is close to the familiar model of Subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&amp;#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go:</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing come to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me; however, I chose to use the cli-tool, and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite laborious, as you will need an initial ant build file, plus sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to set up your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will only sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash, the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed to the program. </p>&#13;
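<p>Ruby does still expose the program name, just not through ARGV. This is a small added sketch (the file name is hypothetical); the script name lives in the global <code>$0</code>, also aliased as <code>$PROGRAM_NAME</code>:</p>

```ruby
#!/usr/bin/env ruby
# argv_demo.rb -- hypothetical file name, for illustration only
puts $0                     # the script's own name, like argv[0] in C
puts ARGV.length            # the number of real arguments
puts ARGV[0] unless ARGV.empty?  # the first real argument, if any
```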
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation. I'm used to starting at index 1, though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and MacPorts MySQL, you might run into some drama trying to get Ruby and MySQL playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a>, you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation, as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs() was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one reason it goes unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter = null) method takes an optional 'filter' parameter. The online documentation makes scant mention of what values this $filter parameter can take. Luckily, in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is a helper I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param  string $controller_class&#13;
 * @return ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a reasonably recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly, you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema, so we want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to install, convert it to UTF-8 in the same way before uploading it.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is twofold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it could. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life, and not least this blog itself. If you listen to every top ten list of what not to include in your CV, you'll quickly find there's absolutely nothing you should put in your CV. Critically consider what you read about what a good CV looks like, and make your own mind up based upon the supporting arguments and your own CV's feedback. For example, if you disagree with point two and decide to include an interests section, ask recruiters when they call what they think of it: did it provide value or was it noise? If you're getting interviews, ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized; Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce, use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it as a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely check out the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the object's current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design-challenged developer like myself, http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, but note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And, of course, a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience and understanding of how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is unusual among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is equally the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str itself refers to an address in memory where 20 bytes have been reserved. Now, if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone outside the bounds of your allocated storage. No, C will do exactly what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20-char string defined but want to store a 25-char one, you need to reallocate memory: declare a new array, or perform a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in. These were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL (including the encoded post data), the other containing the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
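<p>For clarity, here is that edit as a layout fragment (attributes exactly as described above):</p>

```xml
<!-- Before: the profiler block has no name, which 1.4.1 trips over -->
<block type="core/profiler" output="toHtml"/>
<!-- After: give the block a name attribute -->
<block type="core/profiler" output="toHtml" name="core_profiler"/>
```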
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
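<p>The translation described above can be modelled in a few lines of plain PHP. This is a hypothetical sketch of the mapping, not Magento's actual autoloader code; modelPartToPath() is an invented helper:</p>

```php
<?php
// Hypothetical model of the alias-to-file translation: each
// underscore-separated word is ucfirst'd, and underscores become
// directory separators when locating the class file.
function modelPartToPath(string $part): string
{
    $words = array_map('ucfirst', explode('_', $part));
    return implode('/', $words) . '.php';
}

echo modelPartToPath('a_long_name_for_a_model'), "\n"; // A/Long/Name/For/A/Model.php
echo modelPartToPath('alongnameforamodel'), "\n";      // Alongnameforamodel.php
```

<p>Either way, the camelcased file ALongNameForAModel.php is never looked for, which is why underscored, lowercase model names are the safe choice.</p>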
<p>On Windows this is fine; on a case-sensitive file system, e.g. a typical Unix file system or case-sensitive HFS+ on a Mac, this will not work.</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up either in the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is set more than 30 days ago (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old.</p>&#13;
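<p>For comparison, the same 30-day subtraction can be reproduced in plain PHP with DateTime (dates taken from the example above):</p>

```php
<?php
// Mirror of DATE_SUB('2010-05-20', INTERVAL 30 DAY) using PHP's DateTime.
$cutoff = new DateTime('2010-05-20');
$cutoff->sub(new DateInterval('P30D')); // P30D = a period of 30 days
echo $cutoff->format('Y-m-d'), "\n";    // prints 2010-04-20
```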
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three-column layout, you need to edit page.xml and, within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included (getChildHtml returns the rendered block as a string, so it must be echoed). Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3column.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where they removed the second parameter passed to the delete function. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile, and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
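<p>As a quick sketch (assuming your shell reads ~/.profile, and that de_DE.UTF-8 shows up in 'locale -a'), the change can be made and verified like this:</p>

```shell
# append a UTF-8 locale export to the profile, then reload it
echo "export LC_ALL='de_DE.UTF-8'" >> ~/.profile
. ~/.profile
# confirm the setting took effect
locale | grep LC_ALL
```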
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. UTF-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, change. So a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
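<p>The same normalisation can be sanity-checked from the shell with the iconv command-line tool (assuming it is installed; most unix systems ship it):</p>

```shell
# 0xE9 is an e-acute in iso-8859-1; converted to utf-8 it becomes the two bytes C3 A9
printf '\351' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1   # → c3 a9
```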
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It isn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory. BSD sed (the version shipped with Mac OSX) requires -i to be followed by a backup file extension, and that extension may be empty; GNU sed, by contrast, treats it as optional.</p>&#13;
<p>The trick is to supply an empty extension: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
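<p>A portable sketch: supplying an attached backup extension (no space) is accepted by both BSD and GNU sed, and sidesteps the whole problem at the cost of a backup file:</p>

```shell
echo 'hello world' > /tmp/helloworld.txt
# -i.bak (note: no space) works with both BSD and GNU sed
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt   # → goodbye world
rm /tmp/helloworld.txt.bak
```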
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br />http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br />http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>${#FILES[@]} expands to the number of elements in FILES, so $(seq 0 $((${#FILES[@]} - 1))) produces the sequence of valid array indices; the seq command generates a sequence of numbers from x to y, one per line. If you call seq 0 4, you will get 0, 1, 2, 3 and 4.</p>&#13;
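<p>For completeness, bash can also iterate the array directly, without seq or index arithmetic; this sketch is equivalent:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )
# quoting "${FILES[@]}" expands each element as a separate word
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```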
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
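<p>As an aside, the alias is only needed because the upstream wasn&#39;t set at push time. A minimal sketch in a throwaway repo (paths and the branch name are illustrative) showing that pushing with -u and HEAD sets the upstream in one step:</p>

```shell
#!/usr/bin/env bash
set -e
# Throwaway demo: a bare "remote" plus a working clone.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init

# Push a brand new branch; -u sets the upstream as it pushes,
# and HEAD saves typing the branch name.
git checkout -q -b zendesk
git push -q -u origin HEAD

# The branch now tracks the remote, so plain `git pull` works.
git rev-parse --abbrev-ref @{u}   # prints origin/zendesk
```

<p>If you can remember -u (I can&#39;t), this makes the sup alias unnecessary for new branches.</p>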
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variable assignments cannot be passed along the pipeline, as each subshell gets its own copy of the environment and its changes never propagate back to the parent shell.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
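<p>A minimal demonstration of the problem, plus the process-substitution workaround from that FAQ (this assumes bash, not plain sh):</p>

```shell
#!/usr/bin/env bash

# The while loop below runs in a subshell because it is part of
# a pipeline, so its increments are lost when the subshell exits.
count=0
printf 'a\nb\n' | while read -r line; do
    count=$((count + 1))
done
echo "$count"   # prints 0, not 2

# Workaround: feed the loop with process substitution so it runs
# in the current shell and the variable survives.
count=0
while read -r line; do
    count=$((count + 1))
done < <(printf 'a\nb\n')
echo "$count"   # prints 2
```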
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When one branch&#39;s history directly follows on from the other&#39;s, the merge can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way: this happens when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
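<p>A quick sketch of the behaviour in a throwaway repo (assuming bash and git; the branch names are made up):</p>

```shell
#!/usr/bin/env bash
set -e
# Build a repo where two branches have diverged.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m base
git checkout -q -b feature
git commit -q --allow-empty -m feature-work
git checkout -q -
git commit -q --allow-empty -m trunk-work   # histories now diverge

# With merge.ff=only, git refuses to create a merge commit.
git config merge.ff only
if git merge feature >/dev/null 2>&1; then
    echo "merged by fast-forward"
else
    echo "not a fast-forward: rebase first"
fi
```

<p>Rebase feature onto the current branch and the same merge goes through as a fast-forward.</p>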
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion, and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea of those that attempt it, and do a good job, being appropriately rewarded for doing so.</p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdom that is hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and a brief comparison between the two principal xUnit TDD styles: statist TDD and mockist/London school TDD. The former is a style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products in a logical group and you want to select the lowest priced product from that group. However, some products have a 0.00 price (for whatever reason). You don&#39;t want to show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null when tableref.column equals 0. MIN ignores nulls, so zero values are excluded and only values greater than zero are considered.</p>
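<p>NULLIF is standard SQL, so you can sanity-check the trick outside MySQL too. A quick sketch with sqlite3 (assuming it is installed; the table and values are made up):</p>

```shell
# Hypothetical data: group 1 has prices 0.00, 9.99 and 4.50;
# group 2 has only a 0.00 price.
sqlite3 :memory: "
CREATE TABLE products (group_id INT, price REAL);
INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.50), (2, 0.0);
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products GROUP BY group_id;"
# Group 1 reports 4.5; group 2, with only zero prices, reports NULL.
```

<p>Note the edge case: a group whose prices are all zero comes back as NULL, not 0.00, so decide how you want to display that.</p>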
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55.</p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. php_select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does check for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…).</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed ssl certificate / key pair and storing the files under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock path for the mysql version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
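<p>For hosts you mount often, the same option can live in ~/.ssh/config so the sshfs command stays short. A sketch, with the host alias and paths being illustrative:</p>

```
# ~/.ssh/config
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
```

<p>Then a plain <code>sshfs awshost:/var/www/ ~/Sites/awshost</code> picks the key up automatically, and regular ssh/scp to awshost benefit too.</p>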
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well, but that gives you the remainder of a string starting from the first occurrence of some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> occurrence of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);  // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who the Executive-X he mentioned in <em>The Early History of Smalltalk</em> was (it was Jerry Elkind). However, I ended up coming away with a lot more: in particular, a new appreciation for a number of scientists I previously knew very little about, scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay, the technical genius of Chuck Thacker and Butler Lampson, and the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens by transporting the reader back to the late 60s and laying out the genesis of PARC. It then proceeds in roughly chronological order, with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did; however, Hiltzik couches everything in terms of the scientists, and it is his ability to bring these characters to life that makes the book so riveting to read.</p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. Inside it, he was the oil that kept the cogs turning, and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang.</p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you pit a visionary maverick against academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
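<p>A more defensive variant I would consider (my suggestion, not the fix I actually shipped) is to normalise the price to a fixed two-decimal string; number_format with an explicit decimal point and no thousands separator is locale-neutral, so it handles both the empty string and any stray separators in one go:</p>

```php
<?php
// Alternative normalisation: always emit a locale-neutral, two-decimal
// string, so '' becomes "0.00" and 12.5 becomes "12.50".
$unitPrice = '';  // what Magento hands us for a free item
$unitPrice = number_format((float) $unitPrice, 2, '.', '');
echo $unitPrice; // prints 0.00
```
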
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes both have the name Mage_Core_Model_Foo, one in local and the other in core, then the version in local is used.</p>
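<p>In sketch form, the resolution behaves roughly like this (a simplified illustration only; the real Varien_Autoload builds a single relative path and lets PHP&#39;s include_path do the searching, but the search order is the same):</p>

```php
<?php
// Simplified illustration of Magento's codepool priority.
function resolveClassFile($className)
{
    // Mage_Core_Model_Foo => Mage/Core/Model/Foo.php
    $relativePath = str_replace('_', '/', $className) . '.php';

    // local wins over community, community wins over core
    foreach (array('local', 'community', 'core') as $codePool) {
        $candidate = 'app/code/' . $codePool . '/' . $relativePath;
        if (file_exists($candidate)) {
            return $candidate;
        }
    }

    return false;
}
```

So with the amended Checkout.php sitting in app/code/local, Magento loads your copy and never touches the core file, which stays intact for future upgrades.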
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to make Nginx more sociable: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. Back in November, though, I did resolve to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing were all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite their formalising most of the vocabulary of OO software development along the way, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero, so at the end of a thirty minute work sprint I would take five minutes to quickly flick through the list. Most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself; blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long Rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh. While I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer then switches you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other option is Vim and its xdebug plugin, which isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires the stable release of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
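<p>For contrast, a minimal composer.json (the package names below are only illustrative) declares dependencies per project, each project getting its own vendor directory, so one codebase can pin a stable release of a library while another tracks a beta of the very same package:</p>

```json
{
    "require": {
        "monolog/monolog": "1.2.*",
        "acme/widgets": "dev-master"
    }
}
```

Running composer install against this file resolves the constraints and fetches the packages from Packagist (or Github) into that project alone; there is no global, blessed install to conflict with.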
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) or dubious (at worst) quality, and a community lacking any sort of dynamism. If you make something easy, people will use it. PEAR is difficult for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DOMDocument requires preserveWhiteSpace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with rather than piped input (strictly, it can read stdin if you pass it the special filename -). We can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
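Process substitution is handy anywhere a tool insists on filename arguments. As a self-contained illustration (a bash/zsh feature, not plain sh; here comm, which requires two sorted files, stands in for the magerun case):

```shell
# comm expects two sorted *files*; <(...) exposes each command's output
# through a file-descriptor path, so no temporary files are needed.
comm -12 <(printf 'apple\nbanana\n') <(printf 'banana\ncherry\n')
# prints the single line common to both streams: banana
```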
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care what differs in the specific contents between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
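To see the flag in action end-to-end, here is a self-contained sketch that builds a throwaway repo in a temp directory (the file name and commit messages are, of course, made up):

```shell
# Create a tiny repo with one tracked file, then ask `git show` for
# just the touched paths, suppressing the commit header and diff.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
echo 'hello' > readme.txt
git add readme.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'add readme'
git show --name-only --pretty=format: HEAD
# the output is just the affected path: readme.txt
```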
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password = PASSWORD(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was generally respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general-purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation, admittedly, but basically it means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, that Rails programmers live in a glass house and shouldn&#39;t throw stones, for one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, it needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh the list, I need to prune my branches. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
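As an aside, recent git versions can do the fetch and the prune in one go with git fetch --prune (or -p). A self-contained sketch, using a bare repository on disk to stand in for a real remote (paths and branch name are purely illustrative):

```shell
# A bare "remote" plus a clone of it lets us exercise the whole prune
# cycle locally, without a real server.
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/clone" 2>/dev/null
cd "$tmp/clone"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial'
git push -q origin HEAD:stale-branch                # branch exists on the remote
git fetch -q                                        # origin/stale-branch now tracked
git -C "$tmp/remote.git" branch -q -D stale-branch  # deleted "from another host"
git fetch -q                # a plain fetch does NOT drop origin/stale-branch
git fetch -q --prune        # fetch and prune in a single step
git branch -r               # origin/stale-branch is no longer listed
```

Recent git can also make pruning the default with git config fetch.prune true.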
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer&#39;s type to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
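Since the gist may not render everywhere, here is a sketch of the shape such a local.xml override takes. Treat the event and observer names as illustrative rather than a drop-in copy — check them against the log module's config.xml for your Magento version:

```xml
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <!-- the core "log" observer: a type of "disabled"
                         means the handler is never invoked -->
                    <log><type>disabled</type></log>
                </observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers>
                    <log><type>disabled</type></log>
                </observers>
            </controller_action_postdispatch>
        </events>
    </frontend>
</config>
```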
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe; they fixed one line but (programmers are human) missed the other, identical, line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages: they promise so much, and when, on that special day, the moon is aligned with Mars, it all just works and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
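For illustration, a sketch of such an adminhtml.xml (the menu path and module name here are invented for the example — mirror the path of the entry you actually want to hide, as defined in the core module's own adminhtml.xml):

```xml
<config>
    <menu>
        <!-- hypothetical: hide an entry under the Catalog menu -->
        <catalog>
            <children>
                <some_menu_item>
                    <depends>
                        <!-- this module never exists, so the dependency
                             check always fails and the item is hidden -->
                        <module>Nonexistent_Module</module>
                    </depends>
                </some_menu_item>
            </children>
        </catalog>
    </menu>
</config>
```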
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much like ; does in regular bash).</p>
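Putting the whole recipe together in a self-contained sketch (a temp directory, with the boundary files created by touch; all names are illustrative):

```shell
# Create boundary files with known timestamps, then list only the files
# whose modification time falls between them. The boundary files are
# excluded by name, since end_date_file is "not newer" than itself and
# would otherwise match.
dir=$(mktemp -d)
cd "$dir"
touch -t 202001010000 start_date_file    # 1 Jan 2020, 00:00
touch -t 202012310000 end_date_file      # 31 Dec 2020, 00:00
touch -t 202006150000 inside.log         # between the two boundaries
touch -t 202106150000 outside.log        # after the end boundary
find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name '*_date_file'
# lists only ./inside.log
```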
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you&#39;ve installed coreutils and have modern ls, find etc., it makes sense that you&#39;ll want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change without complaint.</p>
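<p>A minimal sketch of the whole procedure, assuming the default Macports prefix of /opt/local:</p>

```shell
# Register the Macports bash as a permitted login shell, then switch to it.
# The path assumes the default Macports prefix of /opt/local.
sudo sh -c 'echo /opt/local/bin/bash >> /etc/shells'
chsh -s /opt/local/bin/bash
```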
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other tool that can concatenate strings, would do just fine here too.</p>
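<p>To see the text-munging half of the pipeline in isolation, you can feed it some dummy lines in place of the mysql output (the colour values here are just stand-ins):</p>

```shell
# Simulate three rows of `mysql --silent` output and run them through the
# quoting, joining and wrapping stages of the pipeline.
printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/\(.*\)/[\1];/'
# prints: ['red','green','blue'];
```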
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way, e.g.:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search for documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not NULL. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
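<p>As a concrete sketch, the whole thing can be sent with curl; the field name, date endpoints, core name and host here are all made up, so substitute your own:</p>

```shell
# Hypothetical example: select documents whose expires_at is either NULL
# or falls within the next seven days. Field, core and host are assumptions.
curl -G 'http://localhost:8983/solr/mycore/select' \
    --data-urlencode 'q=-(-expires_at:[NOW TO NOW+7DAYS] AND expires_at:[* TO *])' \
    --data-urlencode 'wt=json'
```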
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It does mean the validator has to be constructed anew on each iteration, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
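<p>The extract-and-symlink step looks something like this; the version number in the tarball and directory names is an assumption, so use whatever you actually downloaded:</p>

```shell
# Unpack the downloaded tarball into /opt and point a stable path at it.
# The versioned names are hypothetical; check what your tarball extracts to.
sudo tar xzf PhpStorm-4.0.1.tar.gz -C /opt
sudo ln -sfn /opt/PhpStorm-4.0.1 /opt/PhpStorm
```

<p>Keeping the version-free /opt/PhpStorm symlink means launchers and scripts never need updating when you upgrade; just re-point the link at the new directory.</p>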
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32-bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean, in the absolute best case, the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
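<p>Something like the following; the -L flag is my own habit, as it dereferences the symlink that /etc/resolv.conf often is, so you copy the real file:</p>

```shell
# Copy the host's DNS settings into the chroot so name resolution works.
# -L follows symlinks, copying the file resolv.conf points at.
cp -L /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
```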
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status to each. It is typically used on grouped products when determining whether all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where the entity id column comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows, one for each unique status value. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (Mac OSX&#39;s Terminal.app being a notable exception), .bash_profile is sourced only by login shells, that is, when you enter your username and password at the console or log in over ssh. The .bashrc file is sourced when starting an interactive non-login shell, which is what you get whenever you open up a terminal.</p>
<p>The confusing part is that you can get a login shell without actually logging in, for example by running the su - command, or via an explicit login-shell option sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only read if .bash_profile sources it itself.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to simply source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
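<p>As a concrete sketch (the file contents here are illustrative, not prescriptive), a minimal .bash_profile that defers to .bashrc might look like this:</p>

```shell
# ~/.bash_profile -- read by login shells only (a sketch; adjust to taste).
# Defer to ~/.bashrc so login and non-login shells end up configured alike.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi

# One-time environment setup that rarely changes can live here:
export PATH="$HOME/bin:$PATH"
```

<p>With this in place, everything else (aliases, prompt setup, shell options) can go in .bashrc and will apply to both kinds of shell.</p>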
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem, and making those optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>The modifyvm subcommand can only be used while a VM is powered off; use controlvm for a running VM.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there; it&#39;s just a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then: we need compatible x86 shared libraries. When you hit these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be dizzying; things have been put into, and then ripped out of, the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t already doing this. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I fall back on time and time again: <a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: its size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 scales to 1280x720 exactly, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? Convert treats the requested geometry as a maximum bounding box, so it would resize the image to 1152x720 to preserve its 16:10 ratio. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action:</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this:</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
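<p>As an aside, newer versions of git (1.7.0 onwards) also accept an explicit form, git push origin --delete branchname, which does the same thing and is rather harder to forget. Here&#39;s a self-contained sketch using throwaway repositories created under a temp directory (all paths are placeholders):</p>

```shell
# Deleting a remote branch with the explicit --delete form, equivalent to
# `git push origin :develop`. Self-contained: builds throwaway repositories
# under a temp directory (requires git >= 1.7.0).
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work"
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git push -q origin HEAD               # publish the initial branch
git branch develop                    # create a local develop branch
git push -q origin develop            # origin/develop now exists
git push -q origin --delete develop   # and now it is gone again
git branch -r                         # only the initial branch should remain
```

<p>The --delete form and the colon form are interchangeable; use whichever you find easier to remember.</p>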
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is that sponge waits until end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or, in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 (cp1252) to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
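<p>To see why the conversion is only safe in one direction, consider byte 0x93: in cp1252 it&#39;s a curly opening quote, while in iso-8859-1 that byte falls in an unassigned control range. A quick sketch (assumes an iconv that knows cp1252):</p>

```shell
f=$(mktemp)
printf '\223quote\224\n' > "$f"   # 0x93 quote 0x94, written as octal escapes
iconv -f cp1252 -t utf-8 "$f"     # the curly quotes come out as valid UTF-8
```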
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
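<p>For example, with GNU file the -b (brief) and --mime-encoding flags report just the detected encoding (flag support may vary on other systems):</p>

```shell
f=$(mktemp)
printf 'caf\303\251\n' > "$f"   # "café" spelled out as UTF-8 bytes
file -b --mime-encoding "$f"    # utf-8
```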
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it to the original file.</p>
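<p>If moreutils isn&#39;t to hand, a rough stand-in illustrates the idea. This sketch is not the real sponge (which is more careful about permissions and temp file handling), just the soak-then-write behaviour:</p>

```shell
# Hypothetical stand-in: read all of stdin into a temp file first,
# then replace the target - so the input file is never truncated early.
sponge() {
  tmp=$(mktemp)
  cat > "$tmp"
  mv "$tmp" "$1"
}
f=$(mktemp)
printf 'abc\n' > "$f"
tr 'a-z' 'A-Z' < "$f" | sponge "$f"   # in-place edit without clobbering input
cat "$f"                              # ABC
```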
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
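<p>You can try the same pattern locally without a webserver; here cat plays the part of wget -O - (bash is required for the &lt;( ) syntax, and the paths are made up for the demo):</p>

```shell
set -e
d=$(mktemp -d)
mkdir -p "$d/src/atarfile"
echo hello > "$d/src/atarfile/file.txt"
tar -C "$d/src" -czf "$d/atarfile.tar.gz" atarfile   # build a small tarball
mkdir "$d/out"
cd "$d/out"
tar zx < <(cat "$d/atarfile.tar.gz")   # extract via process substitution
cat atarfile/file.txt                  # hello
```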
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply going through the list of modules unchecking everything. First you need to uncheck everything you can and save changes. Then you go through the list again, disabling the previously greyed-out modules (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can take the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
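<p>The same chaining pattern works anywhere in the shell. A miniature sketch, with a hypothetical disable_all function standing in for drush pm-disable and a directory listing standing in for pm-list --pipe:</p>

```shell
d=$(mktemp -d)
touch "$d/ad" "$d/ad_channel" "$d/click_filter"
# disable_all is a made-up stand-in that just echoes its arguments
disable_all() { for m in "$@"; do echo "disabling $m"; done; }
disable_all $(ls "$d")   # one "disabling ..." line per module name
```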
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
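<p>As an aside, recent versions of git let you collapse the last two steps: passing -u (--set-upstream) to git push records the tracking branch as part of the push. A self-contained sketch against a local bare repo (assumes git is installed; paths are made up for the demo):</p>

```shell
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email you@example.com
git config user.name "You"
git remote add origin "$tmp/origin.git"
echo change > feature.txt
git add feature.txt
git commit -qm 'Initial feature commit'
git checkout -qb my-new-feature
git push -qu origin my-new-feature       # push and set upstream in one go
git rev-parse --abbrev-ref @{upstream}   # origin/my-new-feature
```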
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
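<p>One extra point worth knowing: the new membership only takes effect in fresh login sessions. You can confirm it with id, whose -nG flags print just a user&#39;s supplementary group names:</p>

```shell
id -nG   # groups of the current user, e.g. "aaron wheel"
```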
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
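<p>A small aside: git stash pop applies the top stash and drops it in one step, so it can replace the apply above if you don&#39;t want to keep the stash around. A self-contained sketch of the whole scenario (assumes git is installed; paths and names are made up for the demo):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email you@example.com
git config user.name "You"
echo base > notes.txt
git add notes.txt
git commit -qm 'base'
git branch develop
echo change >> notes.txt    # edit made on the wrong branch by mistake
git stash                   # shelve the uncommitted change
git checkout -q develop
git stash pop               # replay it here and drop it from the stash
git commit -aqm 'Apply stashed changes'
```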
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First thing, we&#39;ll need to make a small amendment to the phpunit configuration file, since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework Test Cases, we have our ant build file setup and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39; - naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a first build before the project workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
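<p>For example, to trigger a build of a job and wait for it to finish (the job name my-project is just a placeholder here; per the help text above, the build command takes its own -s flag to wait for completion):</p>

```shell
# Trigger a build of the job "my-project" (placeholder name) and wait for it
# to finish. The first -s gives the server URL; the second -s, after the job
# name, asks the build command to block until the build completes.
java -jar jenkins-cli.jar -s http://localhost:8080 build my-project -s
```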
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><code>reload</code> - Reload server configuration</li>
<li><code>restart</code> - Restart the server</li>
<li><code>exit</code> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
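<p>Concretely, for a Jenkins instance running on localhost port 8080, the three commands look like this:</p>

```shell
# The three HTTP-only administrative commands against a local Jenkins instance
curl http://localhost:8080/reload    # reload the server configuration
curl http://localhost:8080/restart   # restart the server
curl http://localhost:8080/exit      # shut the server down
```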
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
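<p>The embedded gist may not render in every reader, so here is a sketch of the approach (the gist&#39;s version may differ in details): INFORMATION_SCHEMA tracks each table&#39;s data and index sizes, so a query along these lines reports the largest tables in a schema. Replace mydb with your database name.</p>

```shell
# Report the ten largest tables in schema "mydb" (placeholder name), by
# combined data and index size in megabytes.
mysql -uuser -p -e "
  SELECT table_name,
         ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
  FROM information_schema.TABLES
  WHERE table_schema = 'mydb'
  ORDER BY (data_length + index_length) DESC
  LIMIT 10"
```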
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
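<p>As a shorthand, -P combines --partial and --progress, and -e is equivalent to --rsh, so the same transfer can be written more tersely:</p>

```shell
# -P = --partial --progress; -e ssh = --rsh=ssh
rsync -P -e ssh localfile remoteuser@remotehost.com:/remote/path
```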
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins, and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - EGit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You now have set up the remote and pushed your master branch into it. From here it gets tricky, because a subsequent git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best with the least amount of work.</p>
<p>You can also avoid having to do this at all by setting:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store, as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
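<p>For reference, the core of the approach is to ask mysql for the table names matching a LIKE pattern and hand the whole list to mysqldump. A minimal sketch (the function name and exact client flags are my own choices; the gist&#39;s version may differ):</p>

```shell
# Dump only the tables in a database whose names match a LIKE pattern.
# Usage: mysqldump_bypattern <user> <database> '<pattern>'
#   e.g. mysqldump_bypattern root mydb 'mytables_%'
# Note: you'll be prompted for the password twice (once per client call).
mysqldump_bypattern() {
    user="$1"; db="$2"; pattern="$3"
    # -N suppresses the column header; -e runs the query and exits
    tables=$(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '$pattern'" "$db")
    # $tables is deliberately unquoted so each table name becomes its own argument
    mysqldump -u"$user" -p "$db" $tables
}
```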
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the EGit/JGit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby, the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective, it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another its integer value; it&#39;s immutable; and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively yet. My gut, though, says no. I feel that in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax. To me, they detract from Ruby&#39;s power to define elegant and natural sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ANSI or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: piping through a compressor slows the dump down, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row level locking which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump; other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other clients. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur during the backup process; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
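<p>Putting the locking and import-performance options together, a backup one-liner looks something like the command below. It is only printed here, not executed, since it assumes a live MySQL server; the user and database names are placeholders. With locking mitigated by these options, compressing inline is less of a concern than in the naive example above.</p>&#13;

```shell
# 'user' and 'mydatabase' are placeholders. The command is echoed rather
# than run, as no live MySQL server is assumed here.
DB=mydatabase
CMD="mysqldump --single-transaction --skip-lock-tables --disable-keys --no-autocommit -uuser -p $DB"
echo "$CMD | gzip -c > $DB.sql.gz"
```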
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
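<p>The commands so far can be tried end to end in a throwaway directory. The <code>-c</code> flags supply a commit identity inline in case no global git config exists; names and the README file are just examples:</p>&#13;

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init demo             # new repository in ./demo
cd demo
printf 'hello\n' > README
git add README            # stage the file
git -c user.name=Demo -c user.email=demo@example.com commit -m 'First commit'
git status -uno           # clean tree, untracked files hidden
```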
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar subversion model of svn commit.</p>&#13;
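<p>The same push workflow can be rehearsed entirely locally by using a bare repository on disk in place of the <code>ssh://</code> remote. All paths below are throwaway ones from <code>mktemp</code>, and the identity flags are examples:</p>&#13;

```shell
base=$(mktemp -d)
git init --bare "$base/myrepo.git"                   # stand-in for the remote
git -c init.defaultBranch=master init "$base/work"   # working repository
cd "$base/work"
printf 'hi\n' > file.txt
git add file.txt
git -c user.name=Demo -c user.email=demo@example.com commit -m 'Initial commit'
git remote add origin "$base/myrepo.git"
git push origin master
```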
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore. A safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite laborious, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly the same as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
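<p>The BASH behaviour is easy to check from the shell itself; <code>$0</code> inside a script is the script's path, just like C's <code>argv[0]</code>. The throwaway script here is created via <code>mktemp</code>:</p>&#13;

```shell
script=$(mktemp)
printf 'echo "$0"\n' > "$script"
out=$(sh "$script" 1234)
echo "$out"   # the script's path, not 1234
```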
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C, BASH the first element of ARGV is the program's name. In Ruby, and in PERL, it is the first argument passed into the program. </p>&#13;
<p>I'm trying to decide which makes more sense; probably the Ruby/PERL implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation, as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document with an ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP it seems documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book, or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly, you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you use a translated language pack, convert that to UTF-8 with iconv too before importing it:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time-consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh, and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering; it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify and qualify what remains. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves someone some time: when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in statically typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and of course a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, get rid of all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
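<p>As a sketch of that claim (the loopback echo exchange here is invented for the example), each stream call below lines up with a familiar C socket call:</p>&#13;

```php
<?php
// Sketch: a loopback TCP exchange. Strip the $ sigils and each call maps
// closely onto its C counterpart: socket()/bind()/listen(), connect(),
// accept(), send() and recv().
$server = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr);
$addr   = stream_socket_get_name($server, false);  // e.g. "127.0.0.1:49152"

$client = stream_socket_client('tcp://' . $addr);  // connect()
fwrite($client, "ping\n");                         // send()

$conn = stream_socket_accept($server);             // accept()
fwrite($conn, strtoupper(fgets($conn)));           // recv() + send()

$reply = fgets($client);                           // "PING\n"
fclose($client);
fclose($conn);
fclose($server);
```

<p>Swap the loopback server for a real host and the client half of this is your TCP client.</p>&#13;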
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array of 20 chars (room for a 19-character string plus the terminating NUL), and str itself points to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case it means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. These were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
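<p>For example (a minimal sketch, with made-up field values), you can hand such a handle straight to the CSV functions and read the result back without ever touching disk:</p>&#13;

```php
<?php
// Sketch: build a CSV in memory, then read it back from the same handle.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, ['id', 'name']);
fputcsv($fh, [1, 'widget']);
rewind($fh);                          // seek back to the start before reading
$csv = stream_get_contents($fh);      // "id,name\n1,widget\n"
fclose($fh);
```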
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, i.e. just have '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This is assuming you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a time out or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL, including the encoded post data, and the other containing the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
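<p>If you would rather script the retry than paste it into a shell, the same request can be made with PHP's curl extension. This is just a sketch: the URL and post string below are placeholders, not real Worldpay data. Note that passing CURLOPT_POSTFIELDS a string keeps the body application/x-www-form-urlencoded; passing an array would switch the request to multipart/form-data.</p>&#13;

```php
<?php
// Sketch: resubmit the saved callback post data with PHP's curl extension.
// $postData would be the body taken from the failure-notification attachment.
$postData = 'testMode=0&transId=1000000000&transStatus=Y'; // truncated placeholder
$ch = curl_init('https://mysite.com/callback');             // placeholder URL
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    // A string body is sent urlencoded; an array would make it multipart.
    CURLOPT_POSTFIELDS     => $postData,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 10,   // don't hang forever on a dead endpoint
]);
$response = curl_exec($ch);         // string on success, false on failure
curl_close($ch);
```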
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of keeping third-party code out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then, in your editor, add the following line (each externals definition goes on its own line):</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
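<p>If you instead keep both libraries under a single vendor directory, the svn:externals property value lists one definition per line — local directory first, then the repository URL. A sketch (the directory names are illustrative):</p>

```
Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/
dojo http://svn.dojotoolkit.org/src/tags/release-1.4.3/
```

<p>After setting the property, a single svn update pulls both libraries down in one pass.</p>&#13;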
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p>The solution is explained here:</p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camel casing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
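<p>A naming pattern that sidesteps the problem entirely, sketched with hypothetical class and module names:</p>

```php
// class file: app/code/local/MyPackage/MyModule/Model/Longmodelname.php
// no camel casing beyond the fixed package/module segments
class MyPackage_MyModule_Model_Longmodelname extends Mage_Core_Model_Abstract
{
}

// resolves to .../Model/Longmodelname.php identically on
// case-sensitive and case-insensitive file systems
$model = Mage::getModel('mymodule/longmodelname');
```

<p>If the single-word name becomes unreadable, underscores in the alias (as in the a_long_name_for_a_model example above) map to subdirectories, so you can split the name across folders instead.</p>&#13;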
<p>On Windows this is fine; on a case-sensitive file system, e.g. case-sensitive HFS (Mac) or a typical Unix file system, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
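<p>For batch jobs, the same call can be applied across a product collection. A rough sketch, assuming the num_sales attribute from above; lookupSales() is a hypothetical stand-in for your own ranking source:</p>

```php
// load the collection with only the attribute we intend to touch
$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    $product->setNumSales(lookupSales($product->getId()));
    // persist just this attribute, skipping the full save pipeline
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```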
<p>Using the getResource()-&gt;saveAttribute() call takes about 1/5th of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
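<p>For a timestamp column, the comparison might look like this sketch (the orders table and created_at column are hypothetical):</p>

```sql
-- select rows whose Unix timestamp is more than 30 days in the past
SELECT *
FROM orders
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(created_at);
```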
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column holds a date earlier than this value, the record is more than 30 days old, and the WHERE clause above will match it.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is simplest, create some html, stuff it into a  .phtml file and copy it to a directory within your theme, which  resides (relative to the store root dir) in  'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. The type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However, by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. Whether you use en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
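<p>As a quick sketch, the profile snippet looks like this (the locale name is just an example; check 'locale -a' for what your machine actually provides):</p>

```shell
# In /etc/profile, ~/.profile or ~/.bash_profile:
# pick a utf-8 capable locale your system knows about (see: locale -a)
export LC_ALL='de_DE.UTF-8'
```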
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts change: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
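<p>The same normalisation can be sanity-checked from the shell with the iconv command line tool (a quick sketch; the file names are made up):</p>

```shell
# Write an iso-8859-1 file containing a copyright sign (the single byte 0xa9),
# then normalise it to utf-8, where the same sign is the two bytes 0xc2 0xa9
printf 'Copyright \xa9 2010\n' > latin1.txt
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt
cat utf8.txt
```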
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt (BSD sed's -i takes a mandatory backup suffix argument, and the empty string means no backup). </p>&#13;
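<p>A portable middle ground is to attach a backup suffix directly to -i, which both GNU sed (linux) and BSD sed (Mac OSX) accept; GNU sed would otherwise mistake the separate empty argument for the script:</p>

```shell
# In-place substitution with a .bak backup; works with both GNU and BSD sed
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt        # goodbye world
cat helloworld.txt.bak    # hello world (the pre-edit backup)
```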
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) construct generates the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
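<p>As an aside, bash can also iterate over the array elements directly, which avoids the index arithmetic entirely:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

# Loop over the elements themselves; the quotes keep paths with spaces intact
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```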
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base urls, test payment or shipping account details, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
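<p>Here is a hypothetical end-to-end run (the repo and branch names are made up) that pushes a branch without -u and then repairs the tracking information the same way the alias does:</p>

```shell
# Throwaway remote plus a working clone
git init -q --bare origin-repo.git
git init -q work
git -C work config user.email dev@example.com
git -C work config user.name Dev
git -C work commit -q --allow-empty -m 'initial commit'
git -C work remote add origin "$PWD/origin-repo.git"

# Push a new branch and forget -u, so no tracking information is set
git -C work checkout -q -b zendesk
git -C work push -q origin zendesk

# What 'git sup' expands to for the current branch
git -C work branch --set-upstream-to="origin/$(git -C work symbolic-ref --short HEAD)"
```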
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
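<p>A minimal sketch of the gotcha, plus one simple workaround (the file name is made up):</p>

```shell
# The counter is incremented inside the pipeline's subshell, so the
# parent shell never sees the change
count=0
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))
done
echo "$count"   # prints 0 in bash: the increment happened in a subshell

# Workaround: redirect the loop's input instead of piping into it,
# so the loop body runs in the current shell
count=0
printf 'a\nb\nc\n' > lines.txt
while read -r line; do
  count=$((count + 1))
done < lines.txt
echo "$count"   # prints 3
```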
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p></blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
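<p>A quick sketch of what this buys you, in a throwaway repo with made-up branch names: once merge.ff is set to only, a merge that cannot fast-forward is refused instead of silently creating a merge commit:</p>

```shell
git init -q demo
git -C demo config user.email dev@example.com
git -C demo config user.name Dev
git -C demo config merge.ff only

# Create two diverging branches
git -C demo commit -q --allow-empty -m 'base'
git -C demo checkout -q -b feature
git -C demo commit -q --allow-empty -m 'feature work'
git -C demo checkout -q -
git -C demo commit -q --allow-empty -m 'diverging work on the original branch'

# Not a fast-forward, so the merge is refused (non-zero exit)
git -C demo merge feature || echo 'merge refused: rebase first'
```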
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible, the command line flag overrides the config and you can force the merge through with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. In my experience that leads, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is that of always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
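<p>For the curious, the data builder pattern looks something like this (sketched in Python; the Order class and its fields are invented for illustration):</p>

```python
class Order:
    """Hypothetical domain object used only to illustrate the pattern."""
    def __init__(self, customer, items, discount):
        self.customer, self.items, self.discount = customer, items, discount

class OrderBuilder:
    """Supplies sensible defaults; tests override only what they care about."""
    def __init__(self):
        self._customer, self._items, self._discount = "anonymous", ["widget"], 0.0

    def with_customer(self, name):
        self._customer = name
        return self

    def with_discount(self, pct):
        self._discount = pct
        return self

    def build(self):
        return Order(self._customer, self._items, self._discount)

# A test that only cares about discounts stays short and intention-revealing:
order = OrderBuilder().with_discount(0.1).build()
print(order.discount, order.customer)
```

<p>The payoff is that fixture setup stops obscuring what a test is actually about.</p>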
<p>I perhaps would also liked to have seen more in introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principle TDD xUnit styles. Specifically the Statist TDD and Mockist/London School TDD styles. The former being a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing State and instead is more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
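<p>To see the trick in action, here is the same query shape run against an in-memory SQLite database (SQLite&#39;s NULLIF and MIN behave the same way here); the table and column names are illustrative:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (group_id INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [(1, 0.0), (1, 9.99), (1, 4.5), (2, 0.0)],
)

# NULLIF(price, 0) turns zero prices into NULL, and MIN ignores NULLs,
# so each group's minimum is taken over its non-zero prices only.
rows = conn.execute(
    "SELECT group_id, MIN(NULLIF(price, 0)) AS min_price "
    "FROM products GROUP BY group_id ORDER BY group_id"
).fetchall()
print(rows)  # [(1, 4.5), (2, None)]
```

<p>Note that a group containing only zero prices comes back as NULL, which the calling code may still need to handle.</p>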
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-gd php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working:</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
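<p>To illustrate what such wrappers might buy us, here is a rough sketch (in Python, since the point is the API shape rather than PHP specifics; the class and method names are invented):</p>

```python
class Str:
    """Thin wrapper giving primitive string ops discoverable, consistent names."""
    def __init__(self, value):
        self.value = value

    def after_first(self, needle):
        # Counterpart of PHP's strstr($haystack, $needle).
        i = self.value.find(needle)
        return Str(self.value[i:]) if i != -1 else None

    def after_last(self, needle):
        # Counterpart of PHP's strrchr($haystack, $needle).
        i = self.value.rfind(needle)
        return Str(self.value[i:]) if i != -1 else None

url = Str("http://www.google.com/a/b/c/d.img")
print(url.after_last("/").value)   # /d.img
print(url.after_first("/").value)  # //www.google.com/a/b/c/d.img
```

<p>With names like these the relationship between the two operations is obvious at a glance, which is exactly what strstr/strrchr fail at.</p>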
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero-priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t live with broken windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepools&#39; higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
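<p>A minimal sketch of that override as shell commands. The scratch tree below only stands in for a real install so the commands are self-contained; on an actual store you would run the mkdir and cp against your codebase from the Magento root:</p>

```shell
set -e
cd "$(mktemp -d)"   # scratch tree standing in for a Magento root
mkdir -p app/code/core/Mage/GoogleCheckout/Model/Api/Xml
touch app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
# the override itself: identical relative path, local codepool instead of core
mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
   app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
# now edit the local copy; the classloader will prefer it over core
```

<p>Leave the core file untouched - upgrades will overwrite it, while the local copy survives.</p>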
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
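<p>Under the hood the -s flags are just shorthand for plain Unix signals sent to the master process, so kill works too. A sketch, assuming the common pidfile location (your build may put it elsewhere):</p>

```shell
# QUIT = graceful (drain connections first), TERM = immediate stop
pidfile=${NGINX_PIDFILE:-/var/run/nginx.pid}
signal=QUIT
if [ -r "$pidfile" ]; then
    # same effect as: nginx -s quit
    kill -s "$signal" "$(cat "$pidfile")" || echo "kill failed (are you root?)"
else
    echo "skipping: no readable pidfile at $pidfile"
fi
```
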
<p>Let your visitors finish their drink - don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and to focus particularly on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and reasons about why its lasting success lay more in blazing a trail for others to follow than in being a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. And although the development of Smalltalk formalised most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, I wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can - whether it&#39;s just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list setup in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
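<p>None of this is switched on in a vanilla zsh. The behaviour above typically comes from a handful of options and aliases (frameworks like oh-my-zsh ship equivalents of these); a sketch for your ~/.zshrc:</p>

```shell
# ~/.zshrc sketch - one typical way to get the behaviour shown above
setopt auto_cd           # type a directory name (or ..) to cd into it
setopt auto_pushd        # every cd pushes the old directory onto the stack
setopt pushd_ignore_dups # keep the stack free of duplicates
alias d='dirs -v'        # list the stack with indices
alias 1='cd -1'          # jump straight to stack entry 1
alias 2='cd -2'          # ...and so on for further entries
```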
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default), and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
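<p>If the gem is being pulled in via Bundler, the same flags can be recorded once with Bundler&#39;s per-gem build config so that bundle install picks them up automatically (same macports paths as above):</p>

```shell
bundle config build.mysql2 \
  --with-mysql-lib=/opt/local/lib/mysql55/mysql \
  --with-mysql-include=/opt/local/include/mysql55/mysql
# subsequent 'bundle install' runs pass these flags to the mysql2 build
```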
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine; however, if we want to debug during a phpunit test, normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its own port 9000 back to port 9000 on the machine you connected from. When xdebug on the VM goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
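<p>If you find yourself setting up this tunnel often, the forward can live in your ssh config instead of on the commandline (host alias illustrative, ports as above):</p>

```
# ~/.ssh/config
Host myvm.local
    RemoteForward 9000 localhost:9000
```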
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="https://github.com">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available, and in the PHP camp PEAR wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
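<p>For the uninitiated: a project declares the libraries it builds on in a composer.json file at its root. A hypothetical minimal example (the package name and version constraint here are purely illustrative):</p>

```json
{
    "require": {
        "monolog/monolog": "1.*"
    }
}
```

<p>Running composer install then fetches the declared packages (and anything they in turn depend on) into vendor/ and generates an autoloader for you.</p>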
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What really is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host it themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a>, <a href="http://www.doctrine-project.org">Doctrine</a>, are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time or patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with rather than piped input (newer builds also accept &#39;-&#39; to read from stdin). Either way, we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
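<p>Process substitution isn&#39;t specific to xmllint - any tool that insists on a filename can be fed a pipeline this way, because bash expands &lt;(...) to a readable path such as /dev/fd/63. A toy, self-contained example (bash/zsh only):</p>

```shell
# diff normally wants two files; process substitution hands it two pipelines
out=$(diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo identical)
echo "$out"
```
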
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
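<p>The flag isn&#39;t limited to show and diff; git log accepts it too, which is handy for scanning recent history (a quick sketch, any reasonably recent git should support it):</p>
<pre><code>$ git log --name-only -3
</code></pre>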
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
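<p>A sketch of that last step; the shutdown works without a password while the grant tables are skipped, and the restart command here is just an example that will vary by platform:</p>
<pre><code>$ mysqladmin -uroot shutdown
$ sudo service mysql start
</code></pre>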
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there than on pretty much any other platform, because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, so I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation, admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously deleted them with $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branch list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
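<p>As a one-step alternative, reasonably recent versions of git can prune stale remote-tracking branches while fetching:</p>
<pre><code>$ git fetch --prune origin
</code></pre>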
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
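<p>If you do go the script route, it can be as simple as a cron job that truncates the log tables; the table names below are from a stock Magento 1.x schema, so check them against your install first:</p>
<pre><code>$ mysql -uuser -ppass yourdb -e &#39;TRUNCATE log_url; TRUNCATE log_url_info; TRUNCATE log_visitor; TRUNCATE log_visitor_info; TRUNCATE log_visitor_online;&#39;
</code></pre>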
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy, replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe; being human, they fixed one instance but missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
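<p>For reference, overriding a core class file this way is just a copy into the matching path under app/code/local, with the one-line change applied to the copy:</p>
<pre><code>$ mkdir -p app/code/local/Mage/CatalogSearch/Block
$ cp app/code/core/Mage/CatalogSearch/Block/Result.php \
     app/code/local/Mage/CatalogSearch/Block/Result.php
</code></pre>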
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not supported by the stock Chef gem (currently version 10.12.0). To use Data Bags with Chef Solo, you need version 10.14.0 or above, which means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
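<p>The embedded gist has the exact steps; in outline it is roughly the following (a sketch, not my exact script; the rake task name may differ between chef versions, so check the repo&#39;s Rakefile):</p>
<pre><code>$ git clone git://github.com/opscode/chef.git
$ cd chef
$ rake gem
$ gem install pkg/chef-*.gem
</code></pre>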
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
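<p>A worked example (the file names and dates are arbitrary), covering the first week of August 2012:</p>
<pre><code>$ touch -t 201208010000 start_date_file
$ touch -t 201208080000 end_date_file
$ find . -type f -newer start_date_file ! -newer end_date_file -ls
</code></pre>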
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped \; terminates the command sequence (much like ; does in regular bash).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your macbook air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; macbook pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2 whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
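<p>In practice that boils down to two commands (the path assumes the macports bash location mentioned above):</p>
<pre><code>$ sudo sh -c &#39;echo /opt/local/bin/bash &gt;&gt; /etc/shells&#39;
$ chsh -s /opt/local/bin/bash
</code></pre>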
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other string concatenation approach, would do just fine here too.</p>
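<p>You can try the pipeline without a database handy by substituting printf for the mysql stage:</p>
<pre><code>$ printf &#39;foo\nbar\nbaz\n&#39; | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
&gt; [&#39;foo&#39;,&#39;bar&#39;,&#39;baz&#39;];
</code></pre>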
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think, though, that in Object Oriented software you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products, and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. While that means it also has to be constructed on each loop, it allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called (and I don&#39;t think they do it this way anymore) a &#39;stage 1&#39; install. A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. By break, I mean that in the absolute best case the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
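<p>Once you&#39;re done, it pays to tear the chroot down in the reverse order it was built. A minimal sketch, assuming the /mnt/ubuntu layout used above, that prints the teardown commands so you can review them before running:</p>

```shell
#!/bin/sh
# Print umount commands for a chroot built as above, in reverse order
# of mounting: the virtual/bind filesystems first, then boot, then root.
chroot_teardown() {
  root=$1
  for dir in sys dev proc boot; do
    echo "umount $root/$dir"
  done
  echo "umount $root"
}

chroot_teardown /mnt/ubuntu
```

<p>Exit the chroot first, then run the printed commands (or pipe them to sh) before rebooting.</p>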
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. This is typically used on grouped products when determining if all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily enough, the status column came first and the product id column second (unlike the if branch, where they were in the reverse order). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD) drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the Sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough-and-ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display in the depth and quality of their references. While GOOS is a pretty domain-specific (Mock Object) text, it serves as a wonderful launching pad for further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day-to-day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell, such as when you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, not .bashrc (unless your .bash_profile sources .bashrc itself).</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
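<p>A minimal sketch of that latter arrangement (the PATH line is just an illustration of one-time setup):</p>

```shell
# ~/.bash_profile -- read by login shells only
export PATH="$HOME/bin:$PATH"   # one-time environment setup lives here

# Delegate everything else to .bashrc so login and non-login
# interactive shells behave the same.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```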
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to programmatically load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
    -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
    -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, patents reduce the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
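<p>To see what the middle of that pipeline actually does, you can run the grep/sed filter over some stand-in <em>dpkg --get-selections</em> output (the package names below are invented for illustration):</p>

```shell
# Simulated `dpkg --get-selections` output: one installed package,
# one removed-but-not-purged package still in state "deinstall".
printf 'vim\t\t\tinstall\nold-package\t\t\tdeinstall\n' |
  grep deinstall |
  sed 's/deinstall/purge/'
# The surviving line now reads "old-package ... purge", which is
# exactly the form `dpkg --set-selections` needs to mark it for purging.
```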
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that: we need compatible 32-bit x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing compatible versions of both libXss and several Qt libraries.</p>
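<p>With output that long it helps to filter for just the unresolved entries. A quick awk filter does the trick; the sketch below runs it over a pasted sample of the output above rather than a live binary:</p>

```shell
# A sample of ldd's output; in real use: ldd_out=$(ldd /usr/bin/skype)
ldd_out='libXss.so.1 => not found
librt.so.1 => /lib32/librt.so.1 (0xf75d0000)
libQtGui.so.4 => not found'

# Print only the library names the loader could not resolve
printf '%s\n' "$ldd_out" | awk '/not found/ {print $1}'
```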
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in programs based on how they are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
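<p>As a self-contained sketch (the file and function names here are invented), you can generate Intel-syntax assembly for a tiny function and inspect it; swap -m32/-m64 onto the gcc line to produce the two versions to compare:</p>

```shell
tmp=$(mktemp -d)
# A tiny function to compile; -S stops after generating assembly.
cat > "$tmp/test.c" <<'EOF'
int add(int a, int b) { return a + b; }
EOF
# Intel-syntax assembly for the host architecture
# (add -m32 or -m64 here to compare the two targets)
gcc -S -masm=intel -o "$tmp/test.s" "$tmp/test.c"
grep -m1 'add' "$tmp/test.s"
```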
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying, things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately upgrading my test-suites to be 3.6 compatible, is the extraction of database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of <em>every</em> Address object (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), takes the collection, iterates over it assigning each address to an array keyed by its entityId, and then returns the value matching $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out, I think, for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
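<p>If the sed in that loop looks opaque: it just rewrites the output filename so the original isn&#39;t clobbered. You can check the substitution on its own (sample filename invented; the dot is escaped here so it matches literally):</p>

```shell
# Rewrite "name.jpg" to "name-resized.jpg", leaving other names alone
echo 'holiday.jpg' | sed 's/\.jpg$/-resized.jpg/'
# → holiday-resized.jpg
```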
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In these cases, convert would actually resize the image to something like 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
&gt;  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
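<p>You can watch the whole push/delete lifecycle safely in a pair of throwaway repositories (all paths and branch names below are invented for the demo):</p>

```shell
set -eu
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"   # stand-in remote
git init -q "$tmp/work" && cd "$tmp/work"
git config user.email you@example.com && git config user.name You
git commit -q --allow-empty -m 'initial commit'
git remote add origin "$tmp/origin.git"
git push -q origin HEAD:master         # create remote branch "master"
git push -q origin HEAD:develop        # create remote branch "develop"
git push -q origin :develop            # push "nothing" into develop: deletes it
git ls-remote --heads origin           # only refs/heads/master remains
```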
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is that Sponge waits until it reaches the end of its input (EOF) before opening and writing to an output file. I.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
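<p>A quick way to convince yourself the conversion works: write a file containing the single cp1252 byte 0xE9 (&#39;é&#39;) and run it through iconv (the filename here is invented):</p>

```shell
tmp=$(mktemp -d)
# 0xE9 (octal 351) is "é" in cp1252, but is invalid as a lone byte in UTF-8
printf 'caf\351\n' > "$tmp/sample.txt"
iconv -f cp1252 -t utf-8 "$tmp/sample.txt"
# → café
```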
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can use bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
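<p>You can try the same pattern offline by letting a local <em>cat</em> stand in for wget (tarball name and contents invented; bash is required for the &lt;( ) operator):</p>

```shell
#!/bin/bash
tmp=$(mktemp -d) && cd "$tmp"
mkdir atarfile && echo hello > atarfile/file.txt
tar zcf atarfile.tar.gz atarfile        # build a sample tarball
mkdir extract && cd extract
# cat plays the role of wget -q -O -
# (on systems where tar's default archive isn't stdin, add "f -")
tar zxv < <(cat ../atarfile.tar.gz)
cat atarfile/file.txt
# → hello
```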
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working down the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer unformatted output, so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can feed the module list command&#39;s output to the disable command using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a>, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
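<p>Because --pipe emits a plain whitespace-separated list, you can also filter the output with standard tools before handing it to pm-disable, for instance to keep one module enabled. A minimal sketch (the module names here are simulated rather than read from a live Drupal install):</p>

```shell
# simulate the output of 'drush pm-list --no-core --type=module --pipe',
# then drop any module we want to keep enabled before disabling the rest
modules='ad ad_channel click_filter'
keep='click_filter'
printf '%s\n' $modules | grep -vx "$keep" | xargs echo drush pm-disable
# prints: drush pm-disable ad ad_channel
```

<p>The echo in front of drush is just a dry run; remove it to actually issue the disable command.</p>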
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but comes with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
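<p>On more recent versions of Git, the push and upstream steps can be combined: the -u (short for --set-upstream) flag on git push sets up tracking in one go.</p>

```shell
# push the new branch and set its upstream tracking ref in one step;
# equivalent to the separate push + 'git branch --set-upstream' above
git push -u origin my-new-feature
```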
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
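<p>Note the change only takes effect in new login sessions. To confirm a user&#39;s current supplementary groups, use id; a quick sketch checking the current user:</p>

```shell
# -nG prints the names of all groups the user belongs to
id -nG "$(id -un)"
```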
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies, which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command-line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35-minute phpdoc run took just 43 seconds with DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
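<p>If you won&#39;t need the stash again after applying it, pop combines the apply with dropping the stash entry:</p>

```shell
# apply the most recent stash and remove it from the stash list
# (equivalent to 'git stash apply' followed by 'git stash drop')
git stash pop
```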
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
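<p>As a small shorthand, --track lets Git derive the local branch name from the remote ref:</p>

```shell
# creates a local 'develop' branch tracking origin/develop and switches to it;
# equivalent to 'git checkout -b develop origin/develop'
git checkout --track origin/develop
```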
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
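<p>Equivalently, you can let git write that file for you by running git config as the Jenkins user (this is the same fix the error message below suggests):</p>

```shell
# writes the [user] name/email block into the ~/.gitconfig
# of whichever user runs these commands
git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"
```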
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application, but you&#39;re free to use whatever codebase you like. For the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give our project a name. Choose whatever you like; for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that looks daunting at first glance. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images. To have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately when something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: Reload server configuration</li>
<li>restart: Restart the server</li>
<li>exit: Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins package starts up a Java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
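<p>If you provision servers with scripts, the port change itself is easy to automate with sed. A sketch against a stand-in copy of the defaults file, so as not to touch a real /etc/default/jenkins here:</p>

```shell
# edit HTTP_PORT in place; on a real install run the sed command
# against /etc/default/jenkins (with sudo) -- a temp copy is used
# here purely for illustration
conf="$(mktemp)"
printf 'HTTP_PORT=8080\n' > "$conf"
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' "$conf"
grep '^HTTP_PORT=' "$conf"
# prints: HTTP_PORT=8081
```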
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing that backups were taking a long time and that the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature, and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using your package manager of choice, e.g. aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something built in to do it:</p>
<pre><code>Mage_Payment_Model_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Model_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because a subsequent plain git pull will give you an ugly error</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best with the least amount of work.</p>
<p>To avoid having to do any of this in future repositories, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
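<p>For reference, the whole flow can be exercised end-to-end with a local bare repository standing in for github. This sketch uses git push -u, a newer shorthand that records the upstream much like option 4 (all paths and names here are illustrative):</p>

```shell
# Local bare repo as a stand-in remote; a work repo pushes to it.
TMP=$(mktemp -d)
git init --bare "$TMP/foo.git"
git init "$TMP/work"
cd "$TMP/work"
git symbolic-ref HEAD refs/heads/master   # make sure the branch is 'master'
git config user.email you@example.com
git config user.name 'Your Name'
echo hello > README
git add README
git commit -m 'initial commit'
git remote add origin "$TMP/foo.git"
git push -u origin master   # -u records origin/master as the upstream
git pull                    # no refspec complaint now
```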
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined initialise the test database by changing into your store root and running the UnitTests.php test suite</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
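<p>The shape of the function is roughly this (a sketch, not the gist verbatim; the arguments are illustrative, and -p will prompt for the password once per mysql invocation):</p>

```shell
# Dump only tables whose names match a SQL LIKE pattern,
# e.g. mysqldump_bypattern user mydb 'mytables_%'
mysqldump_bypattern() {
  local user="$1" db="$2" pattern="$3" tables
  tables=$(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '$pattern'" "$db") || return 1
  mysqldump -u"$user" -p "$db" $tables   # word-splitting on $tables is intended
}
```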
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String. So :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect as much of the learning Ruby literature says to basically ignore them for now, or launch into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string and integer values; it&#39;s immutable; and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it&#39;s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysql issues a begin statement before dumping the contents of a table, ensuring a consistent state of the table without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to it during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement on an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
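<p>Putting the pieces together, a sketch of a dump invocation combining the options discussed (held in a shell variable purely so the flags are visible at a glance; credentials and database name are placeholders):</p>

```shell
# Consistent, low-impact dump that also imports faster.
DUMP="mysqldump --single-transaction --skip-lock-tables \
--disable-keys --no-autocommit -uuser -p mydatabase"
# Then: $DUMP > mydump.sql
echo "$DUMP"
```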
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or in git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to Subversion’s <code>svn delete</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag stages and commits all modified tracked files in the current path; alternatively, you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing one or more paths will run the status against those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>. This is the familiar model of subversion and svn commit.</p>&#13;
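<p>The whole init-to-push round trip can be rehearsed on a single machine by substituting a filesystem path for the ssh:// URL (the paths, names and the throwaway temp directory below are illustrative):</p>

```shell
set -e
tmp=$(mktemp -d)

# 1. Create the bare 'central' repository (stand-in for the remote host).
git init --bare "$tmp/myrepo.git"

# 2. Create a working repository and make an initial commit.
git init "$tmp/work"
cd "$tmp/work"
git config user.email you@example.com   # identity needed for the commit
git config user.name "You"
echo "hello" > README
git add README
git commit -m 'Initial commit'
git branch -M master                    # ensure the branch is named master

# 3. Wire up the origin and push, exactly as in the ssh:// example.
git remote add origin "$tmp/myrepo.git"
git push origin master
```

<p>Afterwards <code>git ls-remote origin</code> shows the pushed master ref sitting in the bare repository.</p>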
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll, and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> instructions up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, plus sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language you will see a word that has some meaning in your native language. Often these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes a word can have a completely different meaning, such as 'Bad' (which means 'bath' in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work. </p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash, the first element of ARGV is the program's name, while in Ruby and Perl it is the first argument passed into the program. </p>&#13;
<p>I'm trying to decide which makes more sense: probably the Ruby/Perl implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little like clicking 'Start' to shut down on Windows. To uninstall a plugin or feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. Even worse is when your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('http://a.com/uri', 'uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
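<p>One caveat, and a tweak of my own rather than part of the original recipe: the blanket substitution also rewrites 'latin1' wherever it appears inside post text. Anchoring the substitution to lines that actually declare a charset is a little safer (GNU sed syntax, demonstrated below on a tiny stand-in for the real dump file):</p>

```shell
# Stand-in dump file: one schema line and one data line that merely
# mentions latin1 (illustrative content, not a real VBulletin dump).
printf '%s\n' \
  'CREATE TABLE post (body text) ENGINE=MyISAM DEFAULT CHARSET=latin1;' \
  'INSERT INTO post VALUES ("an essay about latin1 encodings");' \
  > /tmp/dump_demo.sql

# Rewrite latin1 -> utf8 only on charset-declaration lines; the INSERT
# line, which merely mentions latin1, is left untouched.
sed -i '/CHARSET=latin1\|SET NAMES latin1/ s/latin1/utf8/g' /tmp/dump_demo.sql
cat /tmp/dump_demo.sql
```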
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This utility can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a custom language pack to re-import, convert its XML export to UTF-8 too:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is twofold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative and well spaced, using a minimum of fonts. Giant mastheads, fancy bullets, a mess of typefaces - none of it is impressing anyone, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads: clean and clear is the key. Save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make the best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life, not least this blog itself. If you followed every top ten list of what not to include in your CV you'd quickly find there's absolutely nothing left to put in it. Critically consider what you read about what a good CV looks like and make your own mind up based upon the supporting arguments and your own CV's feedback. For example, if you disagree with point two and decide to include an interests section, ask recruiters when they call you what they think about it: did it provide value or was it noise? If you're getting interviews, ask the recruiters what in your CV is standing out. If you're not, then ask what feedback, if any, there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables whose names are capitalised. Oh, sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
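<p>To make the point concrete, here is a minimal sketch (the constant name and values are mine, purely illustrative) of just how un-constant a Ruby constant is:</p>

```ruby
# Reassigning a constant only triggers a warning such as
# "warning: already initialized constant SPEED_OF_LIGHT";
# the new value is happily accepted.
SPEED_OF_LIGHT = 299_792_458

SPEED_OF_LIGHT = 42   # warns, but succeeds
puts SPEED_OF_LIGHT   # => 42
```

<p>Object#freeze can protect an object's state from mutation, but nothing built in stops the constant itself from being rebound.</p>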
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’: you know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sysadmin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and be left with a recognisable C fragment.</p>&#13;
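<p>A minimal sketch of that idea (a self-contained loopback echo in one script; real code would of course split client and server): remove the $ sigils and the calls map almost directly onto C's socket/connect/send/recv.</p>

```php
<?php
// Self-contained TCP round trip over loopback. Port 0 asks the OS
// to pick a free ephemeral port for the listening socket.
$server  = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr);
$address = stream_socket_get_name($server, false);

// Client side: connect and send a line.
$client = stream_socket_client("tcp://$address", $errno, $errstr, 5.0);
fwrite($client, "hello\n");

// Server side: accept the connection and echo the line back.
$conn = stream_socket_accept($server);
fwrite($conn, fgets($conn));

echo fgets($client);   // prints "hello"

fclose($client);
fclose($conn);
fclose($server);
```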
<p>The biggest benefit of learning a bit of C or C++, though, is to get an appreciation of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a characteristic that keeps the language relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array with room for 19 characters plus the terminator, and str itself points to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in: these were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend starting with the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
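<p>For example (the field names here are mine, purely illustrative), you can pair this with fputcsv to build a CSV entirely in memory:</p>

```php
<?php
// Write CSV rows into an in-memory stream, then read the whole
// result back without ever touching the filesystem.
$fh = fopen('php://memory', 'wb+');

fputcsv($fh, ['id', 'name']);
fputcsv($fh, [1, 'Aaron']);

rewind($fh);                     // seek back to the start of the stream
echo stream_get_contents($fh);   // id,name\n1,Aaron\n
fclose($fh);
```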
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, i.e. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply run:</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, addressing your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this appears to work fine; on a case-sensitive filesystem, e.g. case-sensitive HFS+ on a Mac or a typical Unix filesystem, it will not.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If you are only updating simple attributes (for example, a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The basename and dirname shell utilities are handy and behave similarly to their PHP-based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old.</p>&#13;
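<p>If you want to sanity-check the interval arithmetic outside mysql, GNU date (on Linux; the BSD date on Mac OS X uses -v instead) can reproduce the same calculation:</p>

```shell
# DATE_SUB(CURDATE(), INTERVAL 30 DAY) for a fixed date, done in the
# shell with GNU date; useful for eyeballing report cut-offs.
date -d '2010-05-20 -30 days' +%F    # prints 2010-04-20
```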
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you give the installer into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages under the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages under the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore within the &lt;customer_account&gt; element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name=root element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where the second parameter to the delete command was removed. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile, and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
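<p>A quick sketch of the whole check-and-set dance (the de_DE.UTF-8 value is just my preference; substitute whichever locale your system lists):</p>

```shell
# List a few UTF-8 capable locales the system actually provides
# (the list varies per machine), then export one for this session.
locale -a 2>/dev/null | grep -i 'utf' | head -n 3
export LC_ALL='de_DE.UTF-8'
echo "$LC_ALL"
```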
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ: a cp-1252 trademark symbol has a different byte sequence in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv('iso-8859-1', 'utf-8', $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output, you need to supply a third parameter giving your text's encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_COMPAT, 'utf-8');</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
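<p>Incidentally, the same conversion is available from the command line via the iconv utility (the CLI counterpart of the PHP extension), which is handy for normalising whole files before they ever reach PHP:</p>

```shell
# Convert a single ISO-8859-1 byte (0xA9, the copyright sign) to UTF-8.
# \251 is octal for 0xA9; the output is the two-byte sequence C2 A9.
printf '\251 2010 Example' | iconv -f ISO-8859-1 -t UTF-8
```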
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages, it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>With the BSD sed shipped with Mac OS X it isn't, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is that BSD sed requires an explicit (here empty) backup suffix after -i: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code></p>&#13;
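<p>A middle ground I find safer: attach a backup suffix directly to -i, which (as far as I can tell) both GNU sed and the BSD sed on OS X accept, then delete the backup once you're happy:</p>

```shell
# In-place substitution with an explicit backup file; the suffix is
# attached to -i so the same command line works with GNU and BSD sed.
printf 'hello world\n' > greeting.txt
sed -i.bak 's/hello/goodbye/g' greeting.txt
cat greeting.txt              # goodbye world
rm greeting.txt greeting.txt.bak
```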
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page, you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In OSX, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expression generates the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and seq produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
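<p>As an aside, when you don't need the numeric index at all, bash can expand the array elements directly. A minimal sketch of the same loop:</p>

```shell
#!/usr/bin/env bash
# Iterate the array elements directly, no seq or index arithmetic needed.
# Quoting "${FILES[@]}" keeps any paths containing spaces intact.
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```

<p>This form also behaves correctly for sparse arrays, where index-based counting can miss elements.</p>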
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: baseurls, test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to, don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
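<p>The alias hinges on git symbolic-ref --short HEAD resolving the current branch name, which then gets spliced into --set-upstream-to. A quick throwaway-repo sketch (branch name borrowed from the example above):</p>

```shell
#!/usr/bin/env bash
# Minimal sketch of the command at the heart of the alias: it prints the
# short name of the branch HEAD currently points at.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -qb zendesk           # an unborn branch is fine here
git symbolic-ref --short HEAD      # prints: zendesk
```
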
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
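<p>A minimal sketch of the behaviour, plus one common workaround: feed the loop by process substitution instead of a pipe, so it runs in the current shell and keeps its variables.</p>

```shell
#!/usr/bin/env bash
# The while loop on the right of a pipe runs in a subshell, so its
# updates to count are lost when the pipeline finishes.
count=0
printf 'a\nb\n' | while read -r line; do count=$((count + 1)); done
echo "$count"   # prints 0, not 2

# Workaround: redirect from a process substitution; the loop now runs
# in the current shell and count survives.
count=0
while read -r line; do count=$((count + 1)); done < <(printf 'a\nb\n')
echo "$count"   # prints 2
```
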
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p></blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
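<p>A throwaway-repo sketch of the setting in action (hypothetical branch names; identities are inlined and the config is repo-local, so nothing global is touched): once main and feature diverge, the merge refuses to run rather than silently creating a merge commit.</p>

```shell
#!/usr/bin/env bash
# Demonstrate merge.ff only refusing a non-fast-forward merge.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -qb main
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m base
git branch feature                  # feature starts at base
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m main-work
git checkout -q feature
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m feature-work
git checkout -q main
git config merge.ff only            # repo-local version of the setting
git merge feature 2>/dev/null && echo merged || echo refused   # prints: refused
```
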
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. You find peppered throughout the introduction to PHPUnit subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: specifically the Statist TDD and Mockist/London School TDD styles. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working:</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache or Nginx? It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for Magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git installed already (remember, you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file for the MySQL version you&#39;re using to PHP&#39;s mysql .ini file (Macports keeps extension .ini files under /opt/local/var/db/php54). I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
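<p>For comparison, here is the same &#39;everything after the last occurrence&#39; operation in Python, where rpartition and rsplit at least advertise their relationship to partition and split:</p>

```python
url = "http://www.google.com/a/b/c/d.img"

# Like strrchr($url, '/'): keep from the last '/' to the end, needle included.
# rpartition returns (head, separator, tail); we re-attach the separator.
after_last = "/" + url.rpartition("/")[2]
print(after_last)  # /d.img

# Or drop the needle entirely and keep just the file name
print(url.rsplit("/", 1)[-1])  # d.img
```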
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate the primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book, Taylor&#39;s time at PARC draws to a close, and with his departure so too ends the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resigned and either followed him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or joined one of the many startups blooming in Silicon Valley following the success of IBM&#39;s and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Dont Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
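<p>The lookup in [2] boils down to a first-match search across an ordered list of codepools, roughly like this sketch (the in-memory file set stands in for the filesystem and is purely illustrative):</p>

```python
CODEPOOLS = ["app/code/local", "app/code/community", "app/code/core"]

def resolve(class_name, existing_files):
    """Return the path in the first codepool that contains the class file."""
    rel = class_name.replace("_", "/") + ".php"
    for pool in CODEPOOLS:
        candidate = pool + "/" + rel
        if candidate in existing_files:
            return candidate
    return None

# With the class present in both local and core, local wins:
files = {
    "app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php",
    "app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php",
}
print(resolve("Mage_GoogleCheckout_Model_Api_Xml_Checkout", files))
# app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
```

This is why dropping the amended Checkout.php into app/code/local shadows the core copy without touching it.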
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
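<p>The quit-versus-stop distinction can be sketched in plain shell. This is a toy illustration, not nginx itself: the &#39;worker&#39; subshell traps QUIT so it can finish its in-flight request before exiting, where a blunt stop would kill it mid-request.</p>

```shell
# Toy model of graceful shutdown: a QUIT trap lets the worker finish its
# current job; an untrapped TERM/stop would kill it immediately.
log="$(mktemp)"
(
  trap 'echo finished-current-request; exit 0' QUIT
  sleep 5 &   # stands in for a long-running request
  wait
) > "$log" &
worker=$!
sleep 1                 # give the subshell time to install its trap
kill -QUIT "$worker"
wait "$worker"
result="$(cat "$log")"
echo "$result"
```

<p>Real nginx behaves analogously: -s quit lets each worker drain its connections, while -s stop does not.</p>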
<p>Let your visitors finish their drink: don&#39;t terminate Nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. In November, though, I resolved to read more technical books, and to focus in particular on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and considers why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk&#39;s development formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>Since making a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column, as the name suggests, columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
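<p>As a quick sketch, here are some made-up colon-delimited records (standing in for /etc/passwd, which has the same shape) aligned into a table:</p>

```shell
# Align hypothetical colon-delimited records into a table with column(1)
table="$(printf 'root:0:0\ndaemon:1:1\n' | column -s ':' -t)"
echo "$table"
```

<p>Each colon-separated field ends up padded into its own neatly aligned column.</p>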
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to that index in the stack. It is a killer app.</p>
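<p>Bash users can approximate this with the pushd builtin and dirs -v (my rough equivalent, not zsh&#39;s implementation; in bash you jump with cd ~1 rather than a bare number):</p>

```shell
# Build a two-entry directory stack, then list it with indices
cd /tmp
pushd /usr > /dev/null
stack="$(dirs -v)"
echo "$stack"
```

<p>Index 0 is always the current directory; cd ~1 then hops to entry 1.</p>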
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
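<p>What happens under the hood: :w !cmd pipes the buffer to a shell command, % expands to the current file name, and tee (elevated by sudo) writes its stdin to that file. The mechanism, minus vim and sudo and using a temp file for illustration, looks like this:</p>

```shell
# tee reads stdin and writes it to the named file - exactly what the
# vim mapping relies on, with sudo supplying the root privileges
f="$(mktemp)"
printf 'keepalive_timeout 65;\n' | tee "$f" > /dev/null
written="$(cat "$f")"
echo "$written"
```

<p>Because tee, not vim, opens the file, the write happens with tee&#39;s (i.e. sudo&#39;s) privileges.</p>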
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have Macports in /opt/local, the default, and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and more secure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin, which isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
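<p>By way of illustration, a minimal composer.json declaring a single dependency might look like this (the package name and version constraint here are just an example):</p>

```json
{
    "require": {
        "monolog/monolog": "1.2.*"
    }
}
```

<p>Running composer install then resolves the constraint, fetches the package (from Packagist by default) and generates an autoloader for your project.</p>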
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: package x requires the stable version of package y, while package z requires the beta version of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves. </p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
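<p>Process substitution works by replacing &lt;(cmd) with the name of a file (a FIFO or /dev/fd entry) that yields cmd&#39;s output, so tools that insist on a file name can still read piped data. A self-contained bash example of the same trick:</p>

```shell
# wc here reads from a file-like handle produced on the fly by bash
count="$(wc -l < <(printf 'a\nb\nc\n'))"
echo "$count"
```

<p>No temporary file ever touches disk; bash wires the producer straight to the consumer.</p>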
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care what differs in the specific contents between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
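<p>To see --name-only in action, here&#39;s a throwaway repository (git and a temp directory assumed available):</p>

```shell
# Create a one-commit repo and show only the paths the commit touched
repo="$(mktemp -d)"
cd "$repo"
git init -q .
echo hello > file.txt
git add file.txt
git -c user.email=demo@example.com -c user.name=Demo commit -q -m "add file"
files="$(git show --name-only --pretty=format: HEAD)"
echo "$files"
```

<p>The --pretty=format: part suppresses the commit header, so only the file names remain.</p>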
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms in which he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? One thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how the PHP community is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
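<p>As an aside (not in the original post), newer Git can fold the prune into a regular fetch, so stale remote-tracking branches never pile up. A self-contained sketch using throwaway repos under mktemp; every repo and branch name here is made up for the demo:</p>

```shell
set -e
tmp=$(mktemp -d)

# Build a scratch "remote" with two branches, plus a local clone of it
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/clone"
cd "$tmp/clone"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
branch=$(git symbolic-ref --short HEAD)   # master or main, per local config
git push -q origin "$branch" "$branch:stale-branch"

# Someone on another host deletes the branch directly on the remote...
git clone -q "$tmp/remote.git" "$tmp/other"
git -C "$tmp/other" push -q origin :stale-branch

# ...leaving our remote-tracking ref stale. Prune it as part of a fetch:
git fetch --prune origin
git branch -rv
```

<p>Setting git config fetch.prune true makes every fetch behave this way, which achieves the same end as running git remote prune origin periodically.</p>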
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying; if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
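<p>For reference, the override takes roughly this shape in app/etc/local.xml. This is a hedged sketch rather than the gist&#39;s exact contents: the observer name &#39;log&#39; and the type=disabled trick follow the stock Mage_Log configuration, but check app/code/core/Mage/Log/etc/config.xml in your version for the full list of logging events to disable:</p>

```xml
<config>
    <frontend>
        <events>
            <!-- Repeat this pattern for each Mage_Log event observer -->
            <controller_action_predispatch>
                <observers>
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers>
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_postdispatch>
        </events>
    </frontend>
</config>
```

<p>Clear the configuration cache afterwards so the merged config picks up the change.</p>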
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace the offending line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger-happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to XSS-safe the code, fixing one occurrence but (and programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
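<p>The mechanics of a local code pool override are just a copy plus an edit. A self-contained sketch below; the heredoc stands in for the real core file (which ships with Magento and is much longer), and GNU sed&#39;s -i is assumed:</p>

```shell
set -e
cd "$(mktemp -d)"    # scratch tree standing in for a Magento root

# Stand-in for the shipped core file
mkdir -p app/code/core/Mage/CatalogSearch/Block
cat > app/code/core/Mage/CatalogSearch/Block/Result.php <<'PHP'
<?php
$title = $this->__("Search results for: '%s'",
    $this->helper('catalogsearch')->getQueryText());
PHP

# Copy into the local code pool, which takes precedence over core
mkdir -p app/code/local/Mage/CatalogSearch/Block
cp app/code/core/Mage/CatalogSearch/Block/Result.php \
   app/code/local/Mage/CatalogSearch/Block/Result.php

# Swap the unescaped query text for the escaped variant in the local copy
sed -i 's/getQueryText()/getEscapedQueryText()/' \
    app/code/local/Mage/CatalogSearch/Block/Result.php
grep QueryText app/code/local/Mage/CatalogSearch/Block/Result.php
```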
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine; it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
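<p>The adminhtml.xml for such a module takes roughly this shape. A sketch with hypothetical names: here a (made-up) top-level &#39;example&#39; menu node is hidden by depending on a module that does not exist; use the real node name from the core adminhtml.xml entry you want to hide:</p>

```xml
<config>
    <menu>
        <!-- 'example' is a hypothetical menu node name -->
        <example>
            <depends>
                <module>Nonexistent_Module</module>
            </depends>
        </example>
    </menu>
</config>
```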
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
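<p>To make the touch/find combination concrete, here is a quick self-contained run in a scratch directory (the dates and file names are invented for the demo; the boundary files are excluded from the results by name):</p>

```shell
set -e
cd "$(mktemp -d)"    # scratch directory so nothing real is touched

# Boundary files marking the start and end of the date range
touch -t 202601010000 start_date_file
touch -t 202601310000 end_date_file

# One file inside the range, one outside it
touch -t 202601150000 inside-the-range
touch -t 202602150000 outside-the-range

# Files strictly newer than the start and not newer than the end
find . -type f -newer start_date_file ! -newer end_date_file ! -name '*_date_file'
# prints ./inside-the-range
```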
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence, escaped so the shell passes the semicolon through to find rather than treating it as its own command separator.</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
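<p>The moving part is just an append to /etc/shells. A sketch of the idea against a scratch copy of the file, since editing the real one needs root:</p>

```shell
set -e
shells=$(mktemp)    # scratch stand-in for /etc/shells
printf '/bin/bash\n/bin/zsh\n' > "$shells"

# On the real system this would be:
#   sudo sh -c 'echo /opt/local/bin/bash >> /etc/shells'
#   chsh -s /opt/local/bin/bash
echo /opt/local/bin/bash >> "$shells"

# chsh consults this list before permitting the change
grep -x /opt/local/bin/bash "$shells"
```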
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready-formatted as a Javascript array literal (without any specific escaping), do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it to give you raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
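<p>You can check the awk/paste/sed tail of the pipeline without a database handy by standing in for the mysql output with printf (the column values below are made up):</p>

```shell
# Three fake rows, as mysql --silent would emit them
printf 'alpha\nbeta\ngamma\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# prints ['alpha','beta','gamma'];
```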
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name:[Start TO Finish]
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in the range of this date and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a>, who looks at the topic in depth. I generally think, though, that in Object Oriented software you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It means the validator also has to be constructed on each iteration, but that allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste in the following (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up and moved on to Ubuntu. It was remarkable in that it provided a BSD-like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the core GNU utilities...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break, in the absolute best case, I mean they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as the one to install it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or USB key), get the network card modules loaded and get a DHCP address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want to have functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
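<p>That copy step, using the /mnt/ubuntu mount point from above, is simply:</p>

```shell
# Give the chroot working DNS by copying in the host's resolver config.
cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
```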
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where the product id column comes first). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array result set where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows - one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to be the first column in the result set, so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different: they should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to .twig files, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app, which runs login shells) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive non-login session - that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open up a login shell from within a session, such as when you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell means .bash_profile is sourced, and .bashrc only if your .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
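<p>For that latter approach, a minimal ~/.bash_profile can be as little as this sketch (the existence check guards against a missing .bashrc):</p>

```shell
# ~/.bash_profile: delegate everything to ~/.bashrc so that login and
# non-login interactive shells end up with the same configuration.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```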
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method($this-&gt;anything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably it is stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em>, the configuration files of the uninstalled packages are not deleted.</p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
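<p>To see what the middle of that pipeline does, you can feed it a fabricated selections line (the package name here is made up):</p>

```shell
# dpkg --get-selections prints "name<whitespace>state" pairs; packages in
# state "deinstall" were removed but keep their config files. Rewriting
# the state to "purge" and feeding the result back to --set-selections
# marks them for full purging, which dpkg -Pa then carries out.
echo "somepkg        deinstall" | sed 's/deinstall/purge/'
# -> somepkg        purge
```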
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Modifyvm can only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there; specifically, it&#39;s a 64-bit library. I bet Skype isn&#39;t 64-bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible 32-bit (x86) shared libraries. When you hit these sorts of issues, it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing both a compatible libxss and assorted Qt libraries.</p>
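<p>When the dependency list is this long, grep makes the gaps jump out. A small sketch (printf supplies sample lines in ldd&#39;s output format, standing in for the real Skype binary, so it runs anywhere):</p>

```shell
# Keep only the unresolved entries from ldd-style output
printf '%s\n' \
  'libXss.so.1 => not found' \
  'librt.so.1 => /lib32/librt.so.1 (0xf75d0000)' \
  'libQtGui.so.4 => not found' |
  grep 'not found'
```

<p>Against the real binary, ldd /usr/bin/skype | grep &#39;not found&#39; produces the shopping list of missing libraries directly.</p>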
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into, and then ripped out of, the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6-compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entityId, and then returns whichever entry matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The changed code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about an image file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
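<p>Incidentally, the sed call in the loop above can be swapped for bash parameter expansion, which saves a subshell per file. A sketch of just the renaming step (echo stands in for convert, and the filenames are invented, so it runs without Imagemagick installed):</p>

```shell
for IMAGE in photo1.jpg photo2.jpg; do
  # ${IMAGE%.jpg} strips the extension; we re-append it with a -resized suffix
  echo convert -resize '1280x720' "$IMAGE" "${IMAGE%.jpg}-resized.jpg"
done
```

<p>Drop the echo once the generated commands look right.</p>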
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Using just a leading : (and no local branch name) is basically saying: push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
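<p>You can rehearse the whole dance safely against a throwaway local remote. A sketch (all paths and branch names here are invented for the demo):</p>

```shell
# Create a bare 'remote', clone it, and manufacture a branch to delete
rm -rf /tmp/gitdemo && mkdir -p /tmp/gitdemo
git init --bare /tmp/gitdemo/remote.git
git clone /tmp/gitdemo/remote.git /tmp/gitdemo/clone
cd /tmp/gitdemo/clone
git -c user.email=you@example.com -c user.name=you commit --allow-empty -m 'initial commit'
git push origin HEAD:scratch   # push HEAD into a new remote branch 'scratch'
git branch -r                  # lists origin/scratch
git push origin :scratch       # push nothing into it, i.e. delete it
git branch -r                  # origin/scratch is gone
```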
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
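<p>If moreutils isn&#39;t available, the soak-then-write behaviour is easy to approximate in a few lines of shell. A simplified stand-in (it lacks sponge&#39;s handling of permissions and other niceties, and the function name is invented):</p>

```shell
# Read all of stdin into a temp file first, then move it over the target
soak() {
  tmp=$(mktemp) || return 1
  cat > "$tmp" && mv "$tmp" "$1"
}

printf 'hello\n' > /tmp/soakdemo.txt
tr a-z A-Z < /tmp/soakdemo.txt | soak /tmp/soakdemo.txt   # in-place upper-casing
cat /tmp/soakdemo.txt                                     # prints HELLO
```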
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
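<p>You can watch the same plumbing work entirely offline by substituting cat for wget (file and directory names here are invented for the demo; run it under bash, since process substitution is a bashism):</p>

```shell
# Build a small tarball, then extract it again via process substitution
cd /tmp
mkdir -p psdemo-dir
echo hello > psdemo-dir/file.txt
tar zcf psdemo.tar.gz psdemo-dir
rm -r psdemo-dir                 # prove the extraction recreates it
tar zxv < <(cat psdemo.tar.gz)
cat psdemo-dir/file.txt          # prints hello
```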
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (which could not be disabled earlier because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the Drush disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can capture the module-list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
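<p>The same shape also works with xargs, which copes better if the module list grows very long. A dry-run sketch, with echo standing in for drush so it runs on machines without a Drupal install:</p>

```shell
# A pretend module list, piped straight into a disable command via xargs
printf '%s\n' ad ad_channel click_filter |
  xargs echo drush pm-disable
```

<p>Remove the echo (and keep drush&#39;s confirmation prompt in mind) to run it for real.</p>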
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
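<p>On newer Git versions the push and --set-upstream steps collapse into one with the -u flag. A self-contained rehearsal against a throwaway local remote (paths and the branch name are invented for the demo):</p>

```shell
rm -rf /tmp/gitdemo2 && mkdir -p /tmp/gitdemo2
git init --bare /tmp/gitdemo2/origin.git
git clone /tmp/gitdemo2/origin.git /tmp/gitdemo2/work
cd /tmp/gitdemo2/work
git -c user.email=you@example.com -c user.name=you commit --allow-empty -m 'base'
git checkout -b my-new-feature
git push -u origin my-new-feature   # push and set the upstream in one step
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'   # prints origin/my-new-feature
```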
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
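<p>It&#39;s worth verifying the result; the id command lists a user&#39;s groups, so you can eyeball membership before and after the change (shown here against the current user, since usermod itself needs root):</p>

```shell
id -nG                          # every group the current user belongs to
id -nG | tr ' ' '\n' | sort     # one group per line, handy for diffing
```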
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present:</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command-line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
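The sequence can be replayed end to end in a scratch repository; all names and file content here are illustrative.

```shell
# Sketch: carry uncommitted edits from the default branch over to
# develop via stash, in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo v1 > app.txt
git add app.txt
git commit -q -m 'Initial commit'
git branch develop
# Oops: edits made while still on the default branch...
echo v2 > app.txt
# Shelve the edits, switch branches, and replay them there
git stash save -q 'wip meant for develop'
git checkout -q develop
git stash apply -q
git commit -a -q -m 'Apply stashed changes'
cat app.txt   # v2
```

git stash pop does the same as apply but also drops the stash entry once it applies cleanly.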
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>When you clone a remote git repository you start out on a checkout of the master branch. Odds are there will be other remote branches, usually including one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
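Here is the same thing replayed against a throwaway remote; a local repository stands in for origin and all names are illustrative.

```shell
# Sketch: clone a repository that has a develop branch, then create a
# local tracking branch for it. A local repo stands in for the remote.
set -e
work=$(mktemp -d)
git init -q "$work/upstream"
cd "$work/upstream"
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m 'Initial commit'
git branch develop
git clone -q "$work/upstream" "$work/clone"
cd "$work/clone"
# Create a local develop branch tracking origin/develop and switch to it
git checkout -q -b develop origin/develop
git rev-parse --abbrev-ref develop@{upstream}   # origin/develop
```

On recent Git a plain git checkout develop does the same thing automatically when exactly one remote has a branch of that name.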
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
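To check the identity is actually being picked up, run git config as the jenkins user would see it. The sketch below simulates that with a throwaway home directory standing in for /var/lib/jenkins.

```shell
# Sketch: verify a ~/.gitconfig identity is visible to git.
# A temp directory stands in for the jenkins home (/var/lib/jenkins).
set -e
fake_home=$(mktemp -d)
cat > "$fake_home/.gitconfig" <<'EOF'
[user]
    name = Jenkins
    email = jenkins@localhost
EOF
# git locates the global config through $HOME
HOME="$fake_home" git config --global user.name    # Jenkins
HOME="$fake_home" git config --global user.email   # jenkins@localhost
```

On a real install the equivalent check is sudo -u jenkins -H git config --global user.name.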
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, PHPUnit must load a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that is, at first glance, daunting. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to change &#39;job-name&#39; to &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to build once before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: reload the server configuration</li>
<li>restart: restart the server</li>
<li>exit: close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
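The port change can be scripted too. The sketch below edits a stand-in copy of the file with sed so nothing on the system is touched; on a real install you would run the sed command with sudo against /etc/default/jenkins, and the 8180 port here is just an example.

```shell
# Sketch: flip HTTP_PORT in a stand-in for /etc/default/jenkins.
set -e
conf=$(mktemp)
cat > "$conf" <<'EOF'
# port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
EOF
# Rewrite the setting in place (GNU sed)
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8180/' "$conf"
grep '^HTTP_PORT=' "$conf"   # HTTP_PORT=8180
```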
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the graphical package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>To avoid having to do any of this, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev, meanwhile, on the surface appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or get the module directly from the developer&#39;s Subversion repository:</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a LIKE pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
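<p>That approach can be sketched roughly like this (my own sketch; the actual gist may differ in names and details):</p>

```shell
# Sketch: dump only the tables of a database whose names match a
# mysql LIKE pattern. Usage: mysqldump_bypattern USER DB PATTERN
# (password prompting is left out for brevity -- add -p or use ~/.my.cnf)
mysqldump_bypattern() {
  user="$1"; db="$2"; pattern="$3"
  # -N suppresses the column header, -B gives plain batch output
  tables=$(mysql -u"$user" -N -B -e "SHOW TABLES LIKE '$pattern'" "$db")
  if [ -z "$tables" ]; then
    echo "no tables in $db match '$pattern'" >&2
    return 1
  fi
  # word-splitting of $tables is intentional: one argument per table
  mysqldump -u"$user" "$db" $tables
}
```

<p>Drop something like this in .bash_functions and call it as mysqldump_bypattern root mydb 'mytables_%'.</p>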
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it run for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
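<p>You can see this identity property for yourself in irb (a quick illustration):</p>

```ruby
# Two symbols written with the same name are the very same object;
# two strings with the same contents are, by default, separate objects.
a, b = :my_key, :my_key
puts a.object_id == b.object_id   # true  -- one shared object

s, t = String.new("my_key"), String.new("my_key")
puts s == t                       # true  -- equal contents
puts s.equal?(t)                  # false -- distinct objects in memory
```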
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge, it is after all uncompressed text, at 1 byte per character if the output is ANSI or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysql issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to it during the backup process. That risk is weighed up against the risk of blocking access to the table during a lengthy backup process.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
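<p>Putting the options above together, a backup helper might look like this (a sketch; the function name is my own, and password handling is left to -p or ~/.my.cnf):</p>

```shell
# Combine the options discussed above: dump without blocking writers
# (--single-transaction, --skip-lock-tables), defer index rebuilds on
# import (--disable-keys) and commit once per table (--no-autocommit),
# then gzip for a reasonable speed/size trade-off.
backup_db() {
  user="$1"; db="$2"; outfile="$3"
  mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit \
            -u"$user" "$db" | gzip -c > "$outfile"
}
```

<p>Run it as backup_db user mydatabase mydump.sql.gz, and restore with gunzip -c piped into mysql as shown earlier.</p>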
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
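<p>The greedy vs. non-greedy difference is easy to see in a PCRE-style engine such as Ruby&#39;s (a quick illustration):</p>

```ruby
s = '"The quick brown fox", "The quicker brown fox".'

# Greedy: .* expands to the last '"' on the line, eating both phrases.
puts s.sub(/"The quick brown.*"/, '')    # => .

# Non-greedy: .*? stops at the first closing '"'.
puts s.sub(/"The quick brown.*?"/, '')   # => , "The quicker brown fox".
```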
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance (the fatal error seems to mask PEAR failing to find its download cache directory) with:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite tedious, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
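<p>A quick shell sketch makes that concrete (the file name args.sh is just for illustration):</p>

```shell
# In the shell, $0 is the script's own name; $1 is the first argument
cat > args.sh <<'EOF'
echo "$0"
echo "$1"
EOF
sh args.sh helloworld
# prints "args.sh" then "helloworld"
```

<p>(Ruby does still expose the program name, just separately, via $0 / $PROGRAM_NAME.)</p>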
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name. In Ruby, and in Perl, it is the first argument passed into the program.</p>&#13;
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with the find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   string $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down in a sort of periodic table format and in suitable format for printing to A3.  Like most developers whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
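<p>One caveat: that blanket replace will also rewrite any forum content that happens to contain the literal string 'latin1'. A slightly safer sketch (assuming GNU sed and a standard mysqldump layout) restricts the substitution to schema lines:</p>

```shell
# Demo on a two-line snippet: only charset declarations are rewritten;
# table data containing the word "latin1" is left alone
printf '%s\n' \
  'ENGINE=InnoDB DEFAULT CHARSET=latin1;' \
  'INSERT INTO post VALUES ("latin1 forever");' > dump.sql
sed -i '/CHARSET=\|SET NAMES/ s/latin1/utf8/g' dump.sql
cat dump.sql
# the CHARSET line now reads utf8; the INSERT line is unchanged
```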
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
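<p>It's worth sanity-checking the conversion on a scrap of data before trusting it with the whole dump; a quick sketch:</p>

```shell
# "café" with é encoded as Latin-1 (the single byte 0xE9)
printf 'caf\xe9\n' > sample-latin1.txt
iconv -f latin1 -t utf-8 sample-latin1.txt | od -An -tx1
# the é should now appear as the two UTF-8 bytes c3 a9
```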
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any exported language packs also need converting before re-import:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
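<p>A minimal sketch of what I mean (the constant names here are just my own illustration):</p>

```ruby
# A "constant" in Ruby is simply an identifier that starts with a capital letter.
TIMEOUT = 30

# Reassigning it is perfectly possible; Ruby only emits a warning
# ("already initialized constant TIMEOUT") and carries on regardless.
TIMEOUT = 60

# freeze makes the referenced object immutable, but even that only protects
# the object itself -- the constant can still be pointed at a new object.
GREETING = "hello".freeze
```

<p>Run it and you get a warning about an already initialized constant, but the reassignment goes right ahead.</p>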
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately, in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back-to-front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and be left with a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an “appreciation” of memory management. C is rare among today's programming environments in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL '\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array of 20 chars (room for 19 characters plus the terminating NUL), and str refers to an address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
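<p>To make that concrete, here is a rough sketch in C of the work hidden behind a single '.' concatenation (the function name is my own invention):</p>

```c
#include <stdlib.h>
#include <string.h>

/* Growing a C string means allocating a fresh buffer and copying both
 * halves into it by hand -- the work PHP's '.' operator does for you. */
char *str_concat(const char *a, const char *b)
{
    char *out = malloc(strlen(a) + strlen(b) + 1); /* +1 for the NUL */
    if (out == NULL)
        return NULL;
    strcpy(out, a);
    strcat(out, b);
    return out; /* the caller is responsible for free()ing this */
}
```

<p>Every concatenation pays for an allocation and two copies, which is why building a large string with repeated '.' operations in a loop can get expensive.</p>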
<p>As mentioned above, PHP was originally intended to be a pure templating language for C web applications, which is where PHP modules/extensions come in: originally, that is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
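<p>If you want to see which order your own machine uses, a few lines of C will tell you (a quick sketch):</p>

```c
#include <stdint.h>

/* Inspect the first byte of a multi-byte integer in memory: on a
 * little-endian machine the "little end" (0x34) is stored first,
 * on a big-endian machine the "big end" (0x12) comes first. */
int is_little_endian(void)
{
    uint16_t value = 0x1234;
    const uint8_t *first_byte = (const uint8_t *)&value;
    return *first_byte == 0x34;
}
```

<p>On the x86 machines most of us develop on, this returns true.</p>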
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
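<p>For example, you can build a CSV entirely in memory with fputcsv and read it straight back, with no temp file involved (a minimal sketch):</p>

```php
<?php
// Open a memory-backed stream instead of a real file.
$fh = fopen('php://memory', 'wb+');

// fputcsv wants a stream handle -- the memory stream satisfies it.
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'Aaron'));

// Rewind and read the whole "file" back into a string.
rewind($fh);
$csv = stream_get_contents($fh);
fclose($fh);

echo $csv; // id,name then 1,Aaron on separate lines
```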
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, e.g. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run:</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model with Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
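<p>To see what's going on, the mapping can be mimicked outside PHP. The following is a rough shell sketch (the alias and the awk one-liner are mine, not Magento code) of how the class loader capitalises each underscore-separated token of the alias and turns it into a file path:</p>

```shell
# Mimic how the autoloader maps a model alias suffix to a file path:
# capitalise each underscore-separated token, then turn '_' into '/'.
alias_suffix='a_long_name_for_a_model'
class=$(printf '%s\n' "$alias_suffix" \
  | awk -F_ '{ for (i = 1; i <= NF; i++) $i = toupper(substr($i,1,1)) substr($i,2) } 1' OFS=_)
path="$(printf '%s' "$class" | tr '_' '/').php"
printf '%s\n' "$path"   # A/Long/Name/For/A/Model.php
```

<p>Note how the camelcased interior capitals of ALongNameForAModel are simply never produced, which is why the lookup fails on a case-sensitive filesystem.</p>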
<p>On Windows this is fine; on a case-sensitive filesystem (e.g. case-sensitive HFS+ on Mac, or a typical Unix filesystem) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If you are only updating simple attributes (for example, we have a sales ranking attribute), you can use the following code to update a value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about 1/5th of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM my_table WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date less than this value, the record is more than 30 days old.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout XML templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a product finder template to the three-column layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>Type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout XML files. The template attribute is the path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout XML for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore, within the &lt;customer_account&gt; element, we add the code:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
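<p>As a minimal sketch, the profile addition might look like this (assuming the de_DE.UTF-8 locale is actually installed on your machine; substitute your own):</p>

```shell
# Hypothetical snippet for ~/.profile (or /etc/profile, ~/.bash_profile).
# 'locale -a' lists the locales actually installed; pick a *.UTF-8 one.
export LC_ALL='de_DE.UTF-8'
```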
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client where they were pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts change. So a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, as that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default, both of these functions expect iso-8859-1 input. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
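<p>The same conversion can be sanity-checked from the shell with the iconv(1) utility. Here the single iso-8859-1 byte 0xA9 (the copyright symbol) becomes the two-byte utf-8 sequence C2 A9:</p>

```shell
# Convert the iso-8859-1 copyright symbol (byte 0xA9, octal 251) to utf-8
# and dump the resulting bytes; expect "c2 a9".
printf '\251' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1
```

<p>Feeding that 0xA9 byte to a browser as utf-8 without converting it first is exactly the broken case described above.</p>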
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't (at least not with the BSD sed that ships with Mac OSX), and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to pass -i an empty backup suffix, like this: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
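<p>A portable middle ground, which works with both BSD and GNU sed, is to give -i a real suffix so a backup copy is kept:</p>

```shell
# Works with both BSD (Mac OSX) and GNU (Linux) sed: the substitution is
# made in place and the original is kept as helloworld.txt.bak.
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt        # goodbye world
```

<p>Note the suffix must be attached directly to -i (no space), otherwise GNU sed will treat it as the script argument.</p>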
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base URL directly from Magento; below I've summarised the most common variants you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expansion gives the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. So $(seq 0 $((${#FILES[@]} - 1))) generates the list of array indices: if you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4.</p>&#13;
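<p>If you do not actually need the numeric index, bash can also iterate the array elements directly. A minimal sketch (the paths are illustrative):</p>

```shell
# Iterate the array elements themselves rather than their indices;
# quoting "${FILES[@]}" keeps elements containing spaces intact.
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```

<p>The quoted "${FILES[@]}" form expands to exactly one word per element, which is why it is safer than building index lists by hand.</p>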
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base urls, test payment or shipping account details, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1]; // the plaintext to encrypt, passed as the first CLI argument&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
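<p>The alias works because <em>git symbolic-ref --short HEAD</em> prints the name of the branch HEAD currently points at. A quick sketch in a throwaway repository (the branch name is just an example):</p>

```shell
# Create a scratch repo on a new branch and see what symbolic-ref
# reports; this is the value the alias splices into
# --set-upstream-to=origin/<branchname>.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -q -b zendesk       # works even before the first commit
git symbolic-ref --short HEAD    # prints: zendesk
```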
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
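<p>A quick way to see both the problem and one of those workarounds, assuming bash (process substitution is a bashism, not plain sh):</p>

```shell
# The while loop below runs in a subshell, so its COUNT is discarded
# when the pipeline ends.
COUNT=0
printf 'a\nb\nc\n' | while read -r line; do COUNT=$((COUNT + 1)); done
echo "$COUNT"   # prints 0

# Workaround: feed the loop via process substitution so it runs in
# the current shell and the variable survives.
COUNT=0
while read -r line; do COUNT=$((COUNT + 1)); done < <(printf 'a\nb\nc\n')
echo "$COUNT"   # prints 3
```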
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
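<p>Here is a scratch-repository sketch of the behaviour (file names and messages are made up; it sets merge.ff locally rather than with --global):</p>

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
git config merge.ff only                # local equivalent of the --global setting
base=$(git symbolic-ref --short HEAD)   # whatever the default branch is called

echo one > a.txt;   git add a.txt; git commit -q -m base
git checkout -q -b feature
echo two > b.txt;   git add b.txt; git commit -q -m feature
git checkout -q "$base"
echo three > c.txt; git add c.txt; git commit -q -m diverge

# The branches have diverged, so a plain merge is refused:
if ! git merge -q feature 2>/dev/null; then
  echo "not fast-forwardable: rebase first, or use --no-ff"
fi
```

<p>After rebasing the feature branch (or pulling with --rebase), the same merge goes through as a clean fast-forward.</p>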
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes; this book is more like a lengthy tutorial than a cookbook in that style. As a tutorial, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. You find peppered throughout the introduction to PHPUnit subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and a brief comparison between the two principal TDD xUnit styles: the Statist and the Mockist/London School styles. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
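<p>If you want to try the behaviour quickly, the same trick works in SQLite (assuming the sqlite3 command line tool is available; the table and column names are made up):</p>

```shell
# NULLIF turns the 0.00 price into NULL, and MIN skips NULLs,
# so the lowest non-zero price wins.
sqlite3 :memory: "
  CREATE TABLE products (group_id INT, price DECIMAL);
  INSERT INTO products VALUES (1, 0.00), (1, 9.99), (1, 4.50);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;
"
```

<p>This prints 1|4.5: the 0.00 row is excluded and the real minimum survives.</p>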
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
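<p>The VAR=value prefix in that alias sets the variable for that single invocation only. A quick way to convince yourself of the mechanism, using a throwaway stand-in script in place of the real pear binary (the stand-in and its output are illustrative, not pear&#39;s real behaviour):</p>
<pre><code>$ tmpdir=$(mktemp -d)
$ cat &gt; &quot;$tmpdir/pear&quot; &lt;&lt;&#39;EOF&#39;
#!/bin/sh
echo &quot;using PHP binary: ${PHP_PEAR_PHP_BIN:-/usr/bin/php}&quot;
EOF
$ chmod +x &quot;$tmpdir/pear&quot;
$ &quot;$tmpdir/pear&quot;                      # no prefix: falls back to the default
using PHP binary: /usr/bin/php
$ PHP_PEAR_PHP_BIN=php &quot;$tmpdir/pear&quot;  # with the prefix, as the alias does
using PHP binary: php
</code></pre>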
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc   # or source ~/.zshrc
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working:</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…).</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for Magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
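<p>If you&#39;d rather script the change, the append can be made idempotent with a grep guard. Sketched here against a throwaway temp file standing in for /etc/hosts; point it at the real file (with sudo) for real use:</p>
<pre><code>$ hosts=$(mktemp)   # stand-in for /etc/hosts
$ echo &#39;127.0.0.1 localhost&#39; &gt; &quot;$hosts&quot;
$ grep -q &#39;magento\.dev&#39; &quot;$hosts&quot; || echo &#39;127.0.0.1 magento.dev&#39; &gt;&gt; &quot;$hosts&quot;
$ cat &quot;$hosts&quot;
127.0.0.1 localhost
127.0.0.1 magento.dev
</code></pre>
<p>Running the guarded line a second time changes nothing, so it&#39;s safe to use in provisioning scripts.</p>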
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
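<p>If you want to convince yourself that the execute bit is what gates traversal, here is a disposable demonstration in a temp directory (run as a regular user; root bypasses permission checks):</p>
<pre><code>$ sandbox=$(mktemp -d)
$ mkdir -p &quot;$sandbox/outer/inner&quot;
$ echo hello &gt; &quot;$sandbox/outer/inner/file.txt&quot;
$ chmod a-x &quot;$sandbox/outer&quot;    # without x on a parent, the tree below is unreachable
$ cat &quot;$sandbox/outer/inner/file.txt&quot; 2&gt;/dev/null || echo &quot;cannot traverse outer&quot;
cannot traverse outer
$ chmod a+x &quot;$sandbox/outer&quot;    # restoring it (the a+x above) makes it reachable again
$ cat &quot;$sandbox/outer/inner/file.txt&quot;
hello
</code></pre>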
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
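<p>Going one step further, the same mapping can live in ~/.ssh/config, since sshfs shells out to ssh and ssh reads that file automatically. A sketch using the same (hypothetical) host and key as above:</p>
<pre><code>Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
</code></pre>
<p>With that in place, &#39;sshfs awshost:/var/www/ ~/Sites/awshost&#39; needs no extra options at all.</p>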
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who the Executive-X he mentioned in <em>The Early History of Smalltalk</em> was (it was Jerry Elkind). However, I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that wants to embrace change but either does not understand it, or worse, fears it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
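<p>Mechanically, the override is just the core path mirrored under the local codepool (the Magento root below is hypothetical):</p>
<pre><code>$ cd /path/to/magento   # your Magento root
$ mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml
$ cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
     app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
# now apply the fix to the copy under local/; the classloader picks it over core
</code></pre>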
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Dont Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order local, community then core. This means if two classes have the name Mage_Core_Model_Foo one exists in local the other in core, then the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink; don&#39;t terminate Nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, back in November, resolve to read more technical books, and to focus particularly on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason about why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat; but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite the team formalising most of the vocabulary of OO software development while building Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the succession to C++ and Java.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I consider that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was already doing the rounds in the 60s, I wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would become easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and, despite working from home, I still put in 50-60 hour weeks. That doesn&#39;t leave a lot of spare time, and what there is usually comes late at night, when it&#39;s hard to concentrate.</p>
<p>Since making a determined effort to read more, I&#39;ve read four books cover to cover and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever I can: just before dinner, over coffee or lunch, waiting at the supermarket, or just before bed. By reading every day, even if it was just a little bit, I started to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I&#39;m slightly obsessive about keeping the unread count at zero: at the end of a thirty minute work sprint, I&#39;ll take five minutes to quickly flick through the list. Most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book&#39;s ideas to be more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013: we (still) don&#39;t have flying cars or hoverboards, AND as developers we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits that improve my command line efficiency.</p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Opens $EDITOR so you can compose a long command there. In my setup it fires up vim. This is great for some of the long Rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column utility columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh; while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work in a terminal every single day, it&#39;s things like these you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack; entering an index number then switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh: use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
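<p>If your zsh doesn&#39;t behave this way out of the box, the magic comes from a handful of options and aliases. A minimal sketch for your .zshrc (oh-my-zsh ships something very similar):</p>
<pre><code># auto_cd lets a bare directory name act like cd;
# auto_pushd pushes every old directory onto the stack
setopt auto_cd auto_pushd pushd_ignore_dups

# &#39;d&#39; lists the stack; numeric aliases jump to an entry by index
alias d=&#39;dirs -v&#39;
for index in {1..9}; do
  alias "$index"="cd +${index}"
done
unset index
</code></pre>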
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user&#39;s) permissions, you can still write it. The :w !cmd form pipes the buffer to a shell command, and % expands to the current file&#39;s name, so sudo tee % rewrites the file with elevated permissions:</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem invokes make with the appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local, the default, and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine; however, if we want to debug during a phpunit test, normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its own localhost:9000 back to port 9000 on the machine you connected from. So when xdebug on the VM goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
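<p>If you find yourself typing that often, the forward can live in your ssh config instead; a sketch (the host name is illustrative):</p>
<pre><code>Host myvm.local
    RemoteForward 9000 localhost:9000
</code></pre>
<p>With that in ~/.ssh/config, a plain ssh myvm.local sets up the tunnel automatically.</p>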
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem; its centralised nature is a bottleneck, and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable release of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) and dubious (at worst) quality, and a community lacking any sort of dynamism. If you make something easy, people will use it; PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves. </p>
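<p>To make that concrete: declaring dependencies is just a composer.json in the project root, resolved per project rather than machine-wide (the package names here are made up for illustration; note the per-package stability flag, which is exactly what PEAR could not express):</p>
<pre><code>{
    "require": {
        "vendor/package-x": "1.0.*",
        "vendor/package-y": "2.0.*@beta"
    }
}
</code></pre>
<p>A single composer install then fetches and autoloads the lot.</p>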
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high-quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a>, <a href="http://www.doctrine-project.org">Doctrine</a>, are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months, <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time or patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting XML, though, is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
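<p>Process substitution is worth knowing in general: bash replaces each &lt;(command) with a file-like path connected to that command&#39;s output, so it works anywhere a program expects a filename. A generic sketch (file names illustrative):</p>
<pre><code>$ diff &lt;(sort unsorted-a.txt) &lt;(sort unsorted-b.txt)
</code></pre>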
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
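<p>A close cousin worth knowing is --name-status, which prefixes each path with a letter showing how it changed: A for added, M for modified, D for deleted (the output below is illustrative):</p>
<pre><code>$ git diff master..origin/master --name-status
M       app/Mage.php
A       app/code/local/Example/Module.php
</code></pre>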
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single-user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important, as by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password, and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
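<p>As an aside, issuing FLUSH PRIVILEGES in the same session tells the server to re-read (and re-enable) the grant tables, so you can verify the new password before restarting:</p>
<pre><code>mysql&gt; FLUSH PRIVILEGES;
</code></pre>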
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. There are simply more examples of bad code out there than for pretty much any other platform, because there is more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing a community cannot do is force Google to nuke w3schools&#39; PageRank, which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it, as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there: advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<blockquote>
<p>Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.</p>
</blockquote>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails checks the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To bring things up to date I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
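<p>As an aside (this is not from the steps above, just a convenience worth knowing): git can also fetch and prune stale remote-tracking branches in one step, assuming a remote named origin:</p>
<pre><code>$ git fetch --prune origin
</code></pre>

```shell
$ git fetch --prune origin
```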
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at the offending line (shown below): if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
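<p>One way to do that from the shell (a sketch, assuming you are in the Magento root and using the default file-based cache backend; adjust if you cache in Redis or memcached):</p>
<pre><code>$ rm -rf var/cache/*
</code></pre>

```shell
$ rm -rf var/cache/*
```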
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all regular files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
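<p>Putting the two together, a concrete sketch (the dates and file names here are invented for illustration):</p>
<pre><code>$ touch -t 201208010000 start_date_file
$ touch -t 201208070000 end_date_file
$ find . -type f -newer start_date_file ! -newer end_date_file -ls
</code></pre>

```shell
$ touch -t 201208010000 start_date_file
$ touch -t 201208070000 end_date_file
$ find . -type f -newer start_date_file ! -newer end_date_file -ls
```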
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting: {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and &#39;;&#39; terminates the command sequence (much like it does in regular bash).</p>
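<p>Two standard find variations worth knowing (these are not from the original commands above): terminating -exec with + instead of \; batches many file names into each rm invocation, and GNU find&#39;s -delete removes matches without spawning rm at all:</p>
<pre><code>$ find . -type f -newer start_date_file ! -newer end_date_file -exec rm -f {} +
$ find . -type f -newer start_date_file ! -newer end_date_file -delete
</code></pre>

```shell
$ find . -type f -newer start_date_file ! -newer end_date_file -exec rm -f {} +
$ find . -type f -newer start_date_file ! -newer end_date_file -delete
```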
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to setup a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, as I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
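<p>For reference, the whole dance looks something like this (a sketch, assuming the Macports bash lives at /opt/local/bin/bash; adjust the path to taste):</p>
<pre><code>$ sudo sh -c 'echo /opt/local/bin/bash >> /etc/shells'
$ chsh -s /opt/local/bin/bash
</code></pre>

```shell
$ sudo sh -c 'echo /opt/local/bin/bash >> /etc/shells'
$ chsh -s /opt/local/bin/bash
```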
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
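<p>You can try the formatting stages on their own, without a database, by feeding them some sample lines (the colour values here are just illustrative input):</p>
<pre><code>$ printf 'red\ngreen\nblue\n' | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
&gt; [&#39;red&#39;,&#39;green&#39;,&#39;blue&#39;];
</code></pre>

```shell
$ printf 'red\ngreen\nblue\n' | awk -v q="'" '{ print q $0 q }' | paste -s -d ',' | sed 's/\(.*\)/[\1];/'
```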
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is:</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic; it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. It does mean a new validator has to be constructed each time, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot, and by break, in the absolute best case, I mean the machine merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now, in order to have a functional chroot, we need the proc, dev and sys filesystems to be mounted inside the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev directories, the chroot needs to see the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
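<p>Assuming your root partition is mounted at /mnt/ubuntu as above, that copy is a one-liner:</p>
<pre><code># -L dereferences any symlink so the chroot gets a real file,
# not a link pointing back into the livecd's filesystem
cp -L /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>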
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions: the kernel and kernel modules will be those of the host. If you need to access some specific hardware, you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
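<p>For reference, the repair from inside the chroot looked roughly like this. The disk device is an assumption for my machine; substitute your own, and point grub-install at the disk, not a partition.</p>
<pre><code># finish the interrupted upgrade
apt-get update
apt-get upgrade
# reinstall the bootloader and regenerate its config
grub-install /dev/sda
update-grub
</code></pre>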
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method takes a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so the group was considered out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Oddly though, the status column came first and the product id column second (unlike the if branch, where the order is reversed). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where the first column of each row becomes the key and the second the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two entries, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software, Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool for discovering these protocols. This is still, even in 2012, an unusual way of working for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders, techniques for creating test data for use in your test cases, particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practise what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Michael Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app, which opens every new shell as a login shell), .bash_profile gets sourced only on login. Specifically, this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion around login shells, such as when you use the su - command or run an explicit login shell as sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc only runs if .bash_profile sources it explicitly.</p>
<p>I tend to put environment setup in .bash_profile: paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
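<p>That delegating .bash_profile is only a few lines:</p>
<pre><code># ~/.bash_profile: hand everything off to ~/.bashrc so that login
# and non-login interactive shells end up configured identically
if [ -f &quot;$HOME/.bashrc&quot; ]; then
    . &quot;$HOME/.bashrc&quot;
fi
</code></pre>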
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm can only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then: we need compatible x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libXss and assorted Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32-bit executables, you need to install the i386 libc development package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, pass the -S -masm=intel arguments to gcc and compile once with -m32 and once with -m64 to see how your program changes with the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying: things have been put into, and then ripped out of, the core quite frequently over the last few releases. A common problem I&#39;ve seen lately while upgrading my test suites to be 3.6-compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), takes the collection, iterates over it assigning each address to an array keyed by its entity ID, and then returns the entry matching $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The changed code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is, I think, <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about an image: size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying: push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
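<p>If the colon syntax still feels too cryptic, newer versions of git (1.7.0 onwards) also accept an explicit --delete flag that does the same thing. A runnable sketch against a throwaway repository:</p>

```shell
# Create a scratch remote with a develop branch, then delete the branch
# using the --delete flag (equivalent to: git push origin :develop).
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare remote.git
git init -q work && cd work
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git remote add origin ../remote.git
git push -q origin HEAD:develop      # create the remote branch
git push -q origin --delete develop  # remove it again
git ls-remote --heads origin         # develop is no longer listed
```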
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how sponge waits for end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
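<p>You can try the pattern without touching the network by standing a local file in for the download. A sketch (the explicit bash -c is there because process substitution is a bash feature, not plain sh; the file names are made up for the demo):</p>

```shell
# Build a small tarball locally, then extract it through a named pipe,
# with cat playing the role of wget -q -O -.
mkdir -p src/atarfile && echo 'hello' > src/atarfile/file.txt
tar czf atarfile.tar.gz -C src atarfile
bash -c 'tar zxv < <(cat atarfile.tar.gz)'
```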
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (they were blocked because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module-list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
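<p>For what it&#39;s worth, newer versions of git (1.7.0 onwards) let you collapse the last two steps into one with push -u, which sets the upstream as part of the push. A sketch against a throwaway remote (names made up for the demo):</p>

```shell
# push -u both publishes the branch and records origin/my-new-feature
# as its upstream, replacing the separate --set-upstream call.
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare origin.git
git init -q work && cd work
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git remote add origin ../origin.git
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'
```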
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
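<p>To confirm the change took, id will list every group a user belongs to. A sketch (shown for the current user so the command runs anywhere; substitute the user you modified):</p>

```shell
# -n prints names rather than numeric IDs, -G prints all group memberships.
id -nG "$(whoami)"
```

Note the new group only shows up in the user&#39;s sessions after they log in again.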
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command-line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
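<p>Here is the whole rescue end to end in a throwaway repository (branch and file names are made up for the demo):</p>

```shell
# Start an edit on master by mistake, stash it, and replay it on develop.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
echo 'v1' > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'add notes'
git branch develop             # both branches now share this commit
echo 'v2' > notes.txt          # oops, editing on the wrong branch
git stash save -q              # shelve the change
git checkout -q develop
git stash apply -q             # replay it on the right branch
cat notes.txt                  # the stashed edit came across: v2
```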
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
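<p>For example, a minimal way to do that in an application&#39;s bootstrap — the library path here is illustrative, adjust it to your layout:</p>
<pre><code>&lt;?php
// Put the application&#39;s bundled library ahead of the system-wide PEAR copy
set_include_path(
    realpath(dirname(__FILE__) . &#39;/../library&#39;)
    . PATH_SEPARATOR
    . get_include_path()
);
</code></pre>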
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig:</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
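<p>Equivalently — assuming your Jenkins service runs as the jenkins user — you can have git write the same settings for you:</p>
<pre><code>$ sudo -u jenkins -H git config --global user.name &#39;Jenkins&#39;
$ sudo -u jenkins -H git config --global user.email &#39;jenkins@localhost&#39;
</code></pre>
<p>The -H flag ensures the settings land in the jenkins user&#39;s home directory rather than your own.</p>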
<p>That will allow you to avoid this:</p>
<blockquote><pre><code>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128:
*** Please tell me who you are.

Run

  git config --global user.email &quot;you@example.com&quot;
  git config --global user.name &quot;Your Name&quot;

to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.

fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed

	at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
	at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
	at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
	... 12 more
Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128:
*** Please tell me who you are.
</code></pre></blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
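<p>As a convenience, the Jenkins plugins in the checklist above can be installed in one go with the CLI jar (assuming Jenkins is listening on port 8080; see my post on working with Jenkins from the CLI):</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin \
    checkstyle cloverphp dry htmlpublisher jdepend plot pmd violations xunit
</code></pre>
<p>Restart Jenkins afterwards so the new plugins are picked up.</p>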
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an Ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector, along with a standard PHPUnit configuration file.</p>
<p>First, we need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, PHPUnit must load a custom bootstrap before running tests. Open the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
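<p>If your project doesn&#39;t already have a tests/bootstrap.php, a typical one for a Zend Framework application looks something like this (the constants and paths are illustrative, not taken from the Bookings project):</p>
<pre><code>&lt;?php
// tests/bootstrap.php - prepare the environment before any test runs
error_reporting(E_ALL | E_STRICT);

define(&#39;APPLICATION_ENV&#39;, &#39;testing&#39;);
define(&#39;APPLICATION_PATH&#39;, realpath(dirname(__FILE__) . &#39;/../application&#39;));

// Make sure the application&#39;s own library is found first
set_include_path(implode(PATH_SEPARATOR, array(
    realpath(APPLICATION_PATH . &#39;/../library&#39;),
    get_include_path(),
)));

require_once &#39;Zend/Application.php&#39;;
</code></pre>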
<p>Our build environment is now configured correctly to run Zend Framework test cases: we have our Ant build file set up and defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and PHPUnit configuration files to Git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to view them through Jenkins.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that is, at first glance, daunting. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining settings can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from Git, then process the Ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the CLI jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
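<p>For example, to trigger a build of a job named &#39;Bookings&#39; (the job name here is illustrative) and wait for it to finish:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 build Bookings -s
</code></pre>
<p>Without the trailing -s flag the command returns immediately after queueing the build.</p>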
<p>In recent versions of Jenkins, typical server operations have been decoupled from the CLI tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: reload the server configuration</li>
<li>restart: restart the server</li>
<li>exit: shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins package starts up a Java web container listening on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients noticed backups were taking a long time and growing uncomfortably large. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature, and the following query worked very well.</p>
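<p>In outline, such a report can be built from a query along these lines (the schema name is illustrative; see the gist below for the full report):</p>
<pre><code>SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
  FROM information_schema.TABLES
 WHERE table_schema = &#39;mydatabase&#39;
 ORDER BY (data_length + index_length) DESC;
</code></pre>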
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite Unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from a failed sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
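<p>The same trick works in the other direction; to resume a partially downloaded file, put the remote path first:</p>
<pre><code>rsync --partial --progress --rsh=ssh remoteuser@remotehost.com:/remote/path localfile
</code></pre>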
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution, which is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
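<p>If you can&#39;t load the gist, the approach looks something like this (the addresses, file name and MIME type here are illustrative):</p>
<pre><code>&lt;?php
$mail = new Zend_Mail();
$mail-&gt;setFrom(&#39;sender@example.com&#39;)
     -&gt;addTo(&#39;recipient@example.com&#39;)
     -&gt;setSubject(&#39;Report attached&#39;)
     -&gt;setBodyText(&#39;Please find the report attached.&#39;);

// createAttachment(content, mime type, disposition, encoding, filename)
$mail-&gt;createAttachment(
    file_get_contents(&#39;/path/to/report.pdf&#39;),
    &#39;application/pdf&#39;,
    Zend_Mime::DISPOSITION_ATTACHMENT,
    Zend_Mime::ENCODING_BASE64,
    &#39;report.pdf&#39;
);

$mail-&gt;send();
</code></pre>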
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour-long tutorial going over the basics of using Git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge of Git. Tom Preston-Werner is co-founder of GitHub.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain resolution, and decided to see how easy it was to do in Ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a>, it&#39;s time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo:</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>It turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a Git repository, there are a number of ways to get it into GitHub (or any other remote Git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it requires the least work.</p>
<p>To avoid having to do this at all, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>so that git configures tracking automatically whenever you create a local branch from a remote-tracking one.</p>
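<p>For reference, the whole flow can be rehearsed end to end against a local bare repository standing in for GitHub. This is only a sketch: the paths, names and email are throwaway examples, and newer versions of git spell the last step --set-upstream-to.</p>

```shell
# Rehearse the post's workflow with a local bare repo standing in for GitHub.
set -e
tmp=$(mktemp -d)
git init --bare "$tmp/foo.git"                 # the stand-in "remote"
git init "$tmp/work"
cd "$tmp/work"
git symbolic-ref HEAD refs/heads/master        # make sure the branch is 'master'
git config user.email you@example.com
git config user.name "You"
echo hello > README
git add README
git commit -m 'initial commit'
git remote add origin "$tmp/foo.git"
git push origin master
# option 4 from the list; newer gits spell it --set-upstream-to
git branch --set-upstream-to=origin/master master
git pull                                       # no more ugly error
```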
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of the tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a LIKE pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
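<p>In case the embedded gist below doesn&#39;t load, here is a minimal sketch of the same idea (function and variable names are mine, not the gist&#39;s): ask mysql for the tables matching a LIKE pattern, then hand the whole list to mysqldump in one go.</p>

```shell
# Sketch: dump only the tables in a database whose names match a LIKE pattern.
# Prompts for the password twice (once for mysql, once for mysqldump).
mysqldump_bypattern() {
    local user="$1" db="$2" pattern="$3"
    local tables
    tables=$(mysql -u"$user" -p -N -B -e "SHOW TABLES LIKE '${pattern}'" "$db")
    # word-splitting on $tables is intentional: one argument per table name
    # shellcheck disable=SC2086
    mysqldump -u"$user" -p "$db" $tables
}
# usage: mysqldump_bypattern root mydb 'mytables%' > mytables.sql
```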
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until these updates are provided there, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String. So :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value (note this only finds entries whose key actually is a Symbol; the String-keyed &#39;another_key&#39; entry still comes back nil):</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example, is (as in C) still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable; and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax; to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
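<p>A side note on how this works: the <code>"$@"</code> inside the alias expands to the shell&#39;s (usually empty) positional parameters, not to the alias arguments; the trailing filename still reaches qlmanage because a redirection can sit anywhere on the line. A shell function makes the intent clearer:</p>

```shell
# Same effect as the alias above, written as a bash/zsh function.
# qlmanage -p previews files with Quick Look (OS X only).
ql() { qlmanage -p "$@" >/dev/null 2>&1; }
```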
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character for a single-byte character set or up to 4 bytes per character with UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: the slower compressor stalls the dump, and all the while (by default, with MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysql issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place and will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of a table can be lost as writes occur to it during the backup process; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
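<p>Pulling the options above together, a low-impact compressed backup might look like this. It is wrapped in a function purely so the pieces are easy to see; the function name and file naming are illustrative, not a standard tool.</p>

```shell
# Sketch: compressed mysqldump combining the options discussed above.
# Prompts for the password; writes e.g. mydatabase-2011-05-20.sql.gz
backup_db() {
    local user="$1" db="$2"
    mysqldump -u"$user" -p \
        --single-transaction --skip-lock-tables \
        --disable-keys --no-autocommit \
        "$db" | gzip -c > "${db}-$(date +%F).sql.gz"
}
# usage: backup_db myuser mydatabase
```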
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory (for a bare repository, see git init --bare below).</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds (or, to use git terminology, ‘stages’) a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is the conventional name for a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
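<p>A quick worked example of the effect, using a throwaway repository (the filenames and config values are illustrative):</p>

```shell
# Demonstrate --assume-unchanged: local edits stop showing up in git status.
set -e
scratch=$(mktemp -d)
cd "$scratch"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name "You"
echo 'db=prod' > config.ini
git add config.ini
git commit -qm 'add config defaults'
git update-index --assume-unchanged config.ini
echo 'db=local' > config.ini    # a local-only tweak
git status --porcelain          # prints nothing: the change is ignored
```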
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&amp;#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally it's shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great; that is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> instructions up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to refresh the plugin list manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means 'bath' in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will only sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
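<p>To see the C/Bash behaviour for yourself, here's a minimal shell sketch (the script path and argument are placeholders):</p>

```shell
# Write a tiny script to a temp file and invoke it with one argument:
# $0 holds the script's own path (like argv[0] in C and PHP),
# $1 holds the first user-supplied argument
cat > /tmp/argv-demo.sh <<'EOF'
echo "argv0: $0"
echo "argv1: $1"
EOF
sh /tmp/argv-demo.sh helloworld
# prints "argv0: /tmp/argv-demo.sh" then "argv1: helloworld"
```

<p>Ruby's ARGV, by contrast, starts at the first user-supplied argument; the program name lives in Ruby's <code>$0</code> instead.</p>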
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash, the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed to the program.</p>&#13;
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little like clicking 'Start' to shut down on Windows. To uninstall a plugin or feature you need to go to the 'Install New Software' screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets confused because, when you install the MySQL RubyGem as directed by Rake, it links against the bundled OS X MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
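<p>As an aside, the <code>env</code> prefix is what carries ARCHFLAGS across the sudo boundary to the gem's native build: it sets the variable only for the single command that follows it. A quick sketch, with a trivial command standing in for <code>gem install</code>:</p>

```shell
# 'env VAR=value cmd' runs cmd with VAR set just for that invocation;
# a trivial echo stands in here for the real 'gem install' build step
env ARCHFLAGS="-arch x86_64" sh -c 'echo "ARCHFLAGS is: $ARCHFLAGS"'
# prints "ARCHFLAGS is: -arch x86_64"
```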
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index just to find a unique document by ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter, $filter. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic-table layout, in a format suitable for printing at A3.  Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or, if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
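<p>One caveat with the blanket replace: if any post content happens to contain the literal string 'latin1', it gets rewritten too. A slightly more surgical variant (a sketch on a throwaway sample file; adapt the pattern to whatever charset declarations actually appear in your dump):</p>

```shell
# Build a tiny stand-in for the real mysqldump output, then rewrite only
# the schema's charset declarations rather than every 'latin1' occurrence
printf 'CREATE TABLE post (body TEXT) ENGINE=MyISAM DEFAULT CHARSET=latin1;\n' > sample-dump.sql
sed -i.bak 's/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8/g' sample-dump.sql
cat sample-dump.sql
# prints the CREATE TABLE line with DEFAULT CHARSET=utf8
```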
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This utility can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
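<p>Before re-importing, it's worth confirming the converted dump really is valid UTF-8. iconv exits non-zero on the first invalid byte sequence, which makes it a handy validator; sketched here on a small sample file standing in for the real dump:</p>

```shell
# Write a sample line containing a multi-byte character (u-umlaut,
# UTF-8 encoded as octal \303\274), then round-trip it through iconv
printf 'INSERT INTO post VALUES ("gr\303\274n");\n' > sample-utf8.sql
iconv -f utf-8 -t utf-8 sample-utf8.sql > /dev/null && echo "valid UTF-8"
# prints "valid UTF-8"
```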
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any language packs you use should be converted to UTF-8 in the same way before re-importing them.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it as a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely check out the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the object's current SQL state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design-challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[For some people, using #VIM comes naturally, i.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>The handy method Zend_Db_Select::assemble() will output the select object's state as SQL. Very useful when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (e.g. 
Mac OS X). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (e.g. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sysadmin monkeys who haven't been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your use of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable sigils, and be left with a recognisable C fragment.</p>&#13;
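<p>To make that concrete, here is a minimal sketch (the port number 18099 and the little echo exchange are invented for illustration). Drop the $ sigils and the fwrite/fgets calls read almost exactly like a C client built on write(2) and fgets(3):</p>&#13;

```php
<?php
// A minimal sketch: client and server in one script, on the loopback
// interface, using a hypothetical port (18099) chosen for illustration.
$server = stream_socket_server('tcp://127.0.0.1:18099', $errno, $errstr);
$client = stream_socket_client('tcp://127.0.0.1:18099', $errno, $errstr, 5);

fwrite($client, "hello\n");             // client sends a line
$conn = stream_socket_accept($server);  // server accepts the queued connection
$line = fgets($conn);                   // server reads the line back out
fwrite($conn, strtoupper($line));       // ... and echoes it back, uppercased
$reply = fgets($client);                // client reads the uppercased reply

fclose($conn);
fclose($client);
fclose($server);
```

<p>Running both ends in one script works because the OS queues the inbound connection until stream_socket_accept() collects it.</p>&#13;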
<p>The biggest benefit, though, of learning a little C or C++ is gaining an appreciation of memory management. C is unusual among today's programming environments in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the language relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ('\0').</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares storage for 20 characters: str refers to an address in memory where 20 bytes have been reserved (19 usable characters plus the terminating NUL). Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone outside the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way toward explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>The memory type is very easy to use: simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
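<p>By way of a short example (the CSV rows here are made up for illustration), here is the whole round trip: write CSV into memory, then read it back, without a file ever touching disk:</p>&#13;

```php
<?php
// Sketch: treat php://memory as a file so stream-only functions like
// fputcsv/fgetcsv can operate on data you already hold in variables.
$fh = fopen('php://memory', 'wb+');

fputcsv($fh, ['sku-1', 'Widget', '9.99']);    // write rows out as CSV...
fputcsv($fh, ['sku-2', 'Gadget', '24.50']);

rewind($fh);                                  // seek back before reading
$first = fgetcsv($fh);                        // ...parsed back into an array

rewind($fh);
$csv = stream_get_contents($fh);              // or grab the whole buffer
fclose($fh);
```

<p>The same handle works with fread, fwrite, fseek and friends, exactly as if it pointed at a real file.</p>&#13;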
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, e.g. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request Worldpay sent to your callback URL (including the encoded POST data), and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today, have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals &lt;target-dir&gt;</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[development tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
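<p>The mapping the class loader performs can be sketched roughly like this (an approximation of Varien_Autoload's behaviour for illustration, not Magento's exact code; the class names are invented):</p>&#13;

```php
<?php
// Rough sketch of the underscore-to-path mapping Magento's autoloader
// performs: each underscore-separated segment becomes a directory
// component, with only the first letter of each segment uppercased.
function classToPath(string $class): string
{
    return str_replace(' ', '/', ucwords(str_replace('_', ' ', $class))) . '.php';
}

// Underscored names map predictably...
$good = classToPath('Mymodule_Model_A_Long_Name_For_A_Model');
// ...but a camelcased tail is NOT split, so on a case-sensitive file
// system a file named ALongNameForAModel.php will never be found.
$bad = classToPath('Mymodule_Model_Alongnameforamodel');
```

<p>Note how the casing inside a segment is preserved as-is, which is exactly why the camelcased name falls apart on a case-sensitive file system.</p>&#13;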
<p>On Windows (case-insensitive) this is fine; on a case-sensitive file system, e.g. Linux/Unix or a case-sensitive HFS+ volume on a Mac, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making only update simple attributes (for example, we have a sales ranking attribute), you can use the following code to set the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find the attribute code, look it up either in the database (eav_attribute) or in the admin backend under Catalog &gt; Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table with a date more than 30 days in the past (relative to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p><span class="Apple-style-span"><span class="Apple-style-span">This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</span></span></p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml files ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or in other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the template's output will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour in memcached 1.4, which removed the second parameter to the delete command. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. Thankfully, 1.4.4 added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile, or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
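<p>As a sketch of the whole check-and-set dance (the locale names here are examples; substitute whatever your own 'locale -a' output shows):</p>

```shell
# List a few of the UTF-8 capable locales installed on this machine.
locale -a | grep -i 'utf' | head -n 5

# Pick one that matches your language and export it for this session.
# Add the export line to ~/.profile (or /etc/profile) to make it stick.
UTF8_LOCALE="$(locale -a 2>/dev/null | grep -i -m 1 'utf')"
export LC_ALL="${UTF8_LOCALE:-en_GB.UTF-8}"
echo "$LC_ALL"
```

<p>With LC_ALL pointing at a UTF-8 locale, characters like ü and ä should render correctly.</p>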
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols, and umlauts differ: a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, as those bytes do not form a valid utf-8 sequence.</p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default, both of these functions expect iso-8859-1 input. To correctly prepare your utf-8 text for output, you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8.</p>
<p><code>htmlentities($text, null, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks.</p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>.</p>&#13;
<p>The trick is to pass an empty backup suffix explicitly: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code></p>&#13;
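<p>For what it's worth, the empty-suffix form above is what BSD sed on Mac OSX wants; GNU sed on Linux accepts a bare -i. A portable middle ground (filenames here are just examples) is to always supply a real backup suffix, which both versions understand:</p>

```shell
# Create a scratch file to edit.
printf 'hello world\n' > /tmp/helloworld.txt

# -i.bak edits in place and keeps the original as helloworld.txt.bak;
# this spelling works with both GNU and BSD sed.
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt

cat /tmp/helloworld.txt
```

<p>You can delete the .bak file afterwards, but having it around makes the in-place edit a lot less dangerous.</p>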
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The <code>$(seq 0 $((${#FILES[@]} - 1)))</code> expands to the list of array indices: <code>${#FILES[@]}</code> is the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4, one per line.</p>&#13;
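<p>If you don't actually need the numeric index, bash can iterate over the array values directly; a simpler sketch of the same loop (the paths are placeholders):</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )

# "${FILES[@]}" expands to one word per element, so it stays correct
# even when an individual path contains spaces.
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```

<p>Both forms print each path on its own line; the index-based version is only needed when the loop body has to know its position in the array.</p>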
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment or shipping credentials, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python, or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento so its models and helpers are available&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
// Plaintext value to encrypt, passed as the first CLI argument&#13;
$data = $_SERVER['argv'][1];&#13;
// Use Magento's own encryption model so the ciphertext matches&#13;
// what it expects to find in core_config_data&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history…</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55.</p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing Phing or PHPUnit…).</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert, or append to an existing line:
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing to do is set up a self-signed SSL certificate / key pair and store them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember when. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the Magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go:</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file path for the MySQL version you&#39;re using to PHP&#39;s mysql.ini. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php5/mysql.ini
</code></pre>
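<p>For completeness, PDO is only one of PHP&#39;s MySQL APIs; mysqli and the legacy mysql extension read their own default_socket settings. Here&#39;s a sketch of the full set of overrides, written to a temp file for safety (the socket path and the php5 ini scan directory are MacPorts conventions and may differ on your install):</p>

```shell
# Build socket overrides for all three MySQL APIs.
# Written to a temp file here; in practice pipe the lines through
# `sudo tee -a` into PHP's ini scan directory instead.
sock=/opt/local/var/run/mysql55/mysqld.sock
ini="${TMPDIR:-/tmp}/mysql-socket.ini"
{
  echo "pdo_mysql.default_socket=${sock}"   # PDO
  echo "mysql.default_socket=${sock}"       # legacy ext/mysql
  echo "mysqli.default_socket=${sock}"      # mysqli
} > "$ini"
cat "$ini"
```

<p>Afterwards, php -i | grep default_socket should show the new paths.</p>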
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
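<p>If you mount the same host regularly, a tidier option still is to give it an alias in ~/.ssh/config, so sshfs (along with ssh and scp) picks the key up automatically. This sketch writes the stanza to a demo file so it&#39;s safe to run; in practice you&#39;d append the same lines to ~/.ssh/config itself (the host, user and key path are just the examples from above):</p>

```shell
# Example ~/.ssh/config stanza; written to a demo file so this is safe to run.
cfg="${TMPDIR:-/tmp}/ssh_config.example"
cat > "$cfg" <<'EOF'
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
EOF
cat "$cfg"
```

<p>With that stanza in ~/.ssh/config, the mount becomes simply sshfs awshost:/var/www/ ~/Sites/awshost.</p>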
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate the primitive types such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However, I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful, even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his or the other labs, or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang.</p>
<p>The book contains a number of particularly powerful scenes. Two stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish, but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox, and of PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
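<p>The mechanics of the override can be sketched like this (here a scratch directory with a stub file stands in for a real Magento root, so the commands are safe to try):</p>

```shell
# Mirror the core class's path under app/code/local, then edit the copy.
# A scratch dir with a stub file stands in for the Magento root here;
# in a real store you'd run the mkdir/cp from the Magento root itself.
work="${TMPDIR:-/tmp}/magento-override-demo"
rm -rf "$work" && mkdir -p "$work" && cd "$work"
mkdir -p app/code/core/Mage/GoogleCheckout/Model/Api/Xml
echo '<?php // stub core class' > app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php

src=app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
dst=app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
mkdir -p "$(dirname "$dst")"
cp "$src" "$dst"   # apply the unit-price fix to the copy, never to the core file
```

<p>With the copy in place, Magento loads the local version first and the core file stays untouched for upgrades.</p>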
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Even though most of the vocabulary for OO software development was formalised during Smalltalk&#39;s development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
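<p>As an aside, in bash (emacs mode) this is bound as ctrl-x ctrl-e out of the box; zsh doesn&#39;t bind it by default, so my guess at the .zshrc wiring you&#39;d need (treat the exact keybinding as an assumption) is:</p>
<pre><code>autoload -Uz edit-command-line
zle -N edit-command-line
bindkey &#39;^Xe&#39; edit-command-line
</code></pre>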
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column utility columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
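<p>For example (these passwd-style lines are fabricated, and exact spacing may vary between column implementations):</p>
<pre><code>$ printf &#39;root:x:0:0\nwww-data:x:33:33\n&#39; | column -s&#39;:&#39; -t
root      x  0   0
www-data  x  33  33
</code></pre>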
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will then switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
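<p>If the above doesn&#39;t work out of the box for you, the directory stack options likely need enabling. A sketch of the relevant .zshrc settings (oh-my-zsh sets up something similar for you; the exact aliases below are my assumption):</p>
<pre><code>setopt AUTO_PUSHD AUTO_CD    # push every cd onto the stack; bare names cd
alias d=&#39;dirs -v | head -10&#39; # list the directory stack
alias 1=&#39;cd -1&#39; 2=&#39;cd -2&#39; 3=&#39;cd -3&#39;
</code></pre>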
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
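<p>One wrinkle: tee echoes the file back to the terminal as it writes it. A common variant (my own preference, not part of the original tip) throws that echo away:</p>
<pre><code>:w !sudo tee % &gt; /dev/null
</code></pre>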
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have MacPorts in /opt/local, the default, and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
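<p>If you install the gem through Bundler instead, the same flags can be persisted so every bundle install picks them up; a sketch assuming the same MacPorts paths as above:</p>
<pre><code>$ bundle config build.mysql2 --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql
</code></pre>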
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other option is Vim and its xdebug plugin, which isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package X requires the stable version of package Y, while package Z requires the beta version of package Y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) and dubious (at worst) quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host it themselves.</p>
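<p>The contrast with PEAR shows up clearly in a composer.json: each project declares its own constraints and resolves them into its own vendor/ directory, so a stable-vs-beta clash between projects simply can&#39;t arise. A minimal sketch (the package names are made up for illustration):</p>
<pre><code>{
    &quot;require&quot;: {
        &quot;acme/stable-lib&quot;: &quot;1.0.*&quot;,
        &quot;acme/experimental-tool&quot;: &quot;dev-master&quot;
    }
}
</code></pre>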
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
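<p>To make the first form concrete, here is a throwaway demo (repository and file names are made up) showing --name-only reducing a commit to its file list:</p>
<pre><code>$ cd &quot;$(mktemp -d)&quot; &amp;&amp; git init -q .
$ echo &#39;hello&#39; &gt; readme.txt &amp;&amp; git add readme.txt
$ git -c user.email=dev@example.com -c user.name=dev commit -qm &#39;add readme&#39;
$ git show --name-only --pretty=format: HEAD
readme.txt
</code></pre>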
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password, and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
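<p>Alternatively (a standard MySQL statement, not part of the recipe above), you can make the new password take effect before restarting by reloading the grant tables in place, though you should still restart without the skip options to re-enable networking:</p>
<pre><code>mysql&gt; FLUSH PRIVILEGES;
</code></pre>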
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still largely respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform, you learn through brutal experience what works and what doesn&#39;t, and your nose becomes finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository. Many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh this list I need to prune my branches. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
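<p>On reasonably recent versions of git, the prune can also be folded into the fetch itself, which saves remembering a second command:</p>
<pre><code>$ git fetch --prune
</code></pre>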
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
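<p>In outline, the local.xml nodes look something like this (the event and observer node names below are from memory of Mage_Log&#39;s config.xml, so treat them as an assumption and verify against a config dump of your install):</p>
<pre><code>&lt;config&gt;
    &lt;frontend&gt;
        &lt;events&gt;
            &lt;controller_action_predispatch&gt;
                &lt;observers&gt;&lt;log&gt;&lt;type&gt;disabled&lt;/type&gt;&lt;/log&gt;&lt;/observers&gt;
            &lt;/controller_action_predispatch&gt;
            &lt;controller_action_postdispatch&gt;
                &lt;observers&gt;&lt;log&gt;&lt;type&gt;disabled&lt;/type&gt;&lt;/log&gt;&lt;/observers&gt;
            &lt;/controller_action_postdispatch&gt;
        &lt;/events&gt;
    &lt;/frontend&gt;
&lt;/config&gt;
</code></pre>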
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now, looking at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to XSS-proof the code: they fixed one spot but (and programmers are human) missed the other, exactly the same line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). To use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence (much as ; does in regular bash).</p>
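<p>Putting the pieces together, here is a self-contained dry run on throwaway files (names and dates are illustrative): create the two boundary files with touch, then let -exec rm delete only what falls inside the window.</p>

```shell
# Demo on throwaway files: build a date window with touch, then delete only what falls inside it
rm -rf /tmp/find-demo && mkdir -p /tmp/find-demo && cd /tmp/find-demo
touch -t 202601010000 start           # window start: 1 Jan 2026 00:00
touch -t 202601050000 end             # window end:   5 Jan 2026 00:00
touch -t 202601030000 report.inside   # created inside the window
touch -t 202512010000 report.before   # created before it
find . -type f -name 'report.*' -newer start ! -newer end -exec rm {} \;
ls report.*                           # only report.before survives
```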
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old MagSafe (1) power pack, as I did from my old 13&quot; MacBook Pro, and it has an equal or higher wattage rating, you can use it with your MacBook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt MacBook Pro MagSafe can power a 45 watt MacBook Air, but a 60 watt MagSafe can&#39;t power an 85 watt 15&quot; MacBook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh, if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had done this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
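<p>As a sketch of the mechanism, run against a throwaway copy so nothing here needs sudo: chsh only accepts shells listed in this file, so once the MacPorts path is appended, chsh -s /opt/local/bin/bash will be permitted. For the real thing, append the same line to /etc/shells itself.</p>

```shell
# Simulate the /etc/shells edit against a copy; chsh only permits shells listed in this file
cp /etc/shells /tmp/shells.demo 2>/dev/null || printf '/bin/sh\n/bin/bash\n' > /tmp/shells.demo
# Append the MacPorts bash if it is not already registered
grep -qx '/opt/local/bin/bash' /tmp/shells.demo || echo '/opt/local/bin/bash' >> /tmp/shells.demo
grep -x '/opt/local/bin/bash' /tmp/shells.demo
```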
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql, and ask it to give you raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes; because of shell escaping rules around single quotes, the quote character is passed in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
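<p>You can dry-run the formatting stages without a database by letting printf stand in for the mysql step (the three colour values are just sample rows):</p>

```shell
# Three rows on stdin stand in for mysql's --silent output
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# prints ['red','green','blue'];
```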
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is:</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now, when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each iteration. This means it also has to be constructed anew each time, but that allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste in the following (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
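<p>If you prefer to script it, the same entry can be written into place with a heredoc (paths as above; adjust for your install):</p>

```shell
# Write the desktop entry into place (paths assume /opt/PhpStorm as described)
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/jetbrains-phpstorm.desktop <<'EOF'
[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
EOF
```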
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot, and by break I mean, in the absolute best case, merely becoming un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
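<p>The whole sequence collected into one function, as a sketch (the device names and filesystem types are the examples from above; adjust them for your own disk, and run as root from the livecd):</p>

```shell
# Sketch of the recovery steps above; nothing runs until the function is called
chroot_prepare() {
    target=$1                                      # e.g. /mnt/ubuntu
    mount -t ext4 /dev/sda5 "$target"              # root partition
    mount -t ext2 /dev/sda1 "$target/boot"         # separate /boot, if you have one
    mount -t proc none "$target/proc"
    mount -o bind /dev "$target/dev"               # bind the host's device nodes
    mount -o bind /sys "$target/sys"
    cp /etc/resolv.conf "$target/etc/resolv.conf"  # working DNS inside the chroot
    chroot "$target" /bin/bash
}
```

<p>Called as chroot_prepare /mnt/ubuntu; once inside, source /etc/profile as before.</p>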
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host. If you need access to some specific hardware, you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products are unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it is used as the key.</p>
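<p>The collapsing behaviour is easy to reproduce outside Zend DB; a quick awk stand-in for fetchPairs (first column keys an associative array) shows why the column order matters:</p>

```shell
# fetchPairs semantics: the first column becomes the key, so rows sharing
# a first-column value collapse. Rows as (status, entity_id), keyed by status:
printf '1 101\n1 102\n2 103\n' | awk '{ a[$1] = $2 } END { n = 0; for (k in a) n++; print n }'
# prints 2 -- two distinct statuses, one row lost
# The same rows as (entity_id, status), keyed by entity_id:
printf '101 1\n102 1\n103 2\n' | awk '{ a[$1] = $2 } END { n = 0; for (k in a) n++; print n }'
# prints 3 -- every product keeps its status
```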
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software, Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD) drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This remains an unusual idea for a lot of developers, even in 2012. I&#39;ve typically used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles, not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end, you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into the common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read, and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophy, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Michael Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and more generally to anyone with an interest in Object Oriented software design and practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile gets sourced only for login shells - specifically, when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open up a login shell in other ways, such as with the su - command, or via an explicit login shell sometimes provided by a desktop environment. In these cases the same rule applies: a login shell means .bash_profile is sourced, and .bashrc is only sourced if .bash_profile does so explicitly.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
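<p>A minimal .bash_profile following that approach might look something like this:</p>
<pre><code># ~/.bash_profile - sourced by login shells only
export PATH=&quot;$HOME/bin:$PATH&quot;

# pull in the interactive-shell configuration as well
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
</code></pre>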
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
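<p>If you prefer to declare the block in a layout update XML file instead, something like the following should work (the block name here is just illustrative):</p>
<pre><code>&lt;!-- e.g. in your theme&#39;s local.xml, under the appropriate layout handle --&gt;
&lt;block type=&quot;cms/block&quot; name=&quot;my.cms.block&quot;&gt;
    &lt;action method=&quot;setBlockId&quot;&gt;&lt;block_id&gt;identifier&lt;/block_id&gt;&lt;/action&gt;
&lt;/block&gt;
</code></pre>
<p>The block then renders wherever its parent outputs it, or via <em>$this-&gt;getChildHtml(&#39;my.cms.block&#39;)</em>.</p>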
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
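<p>With that in place, chained calls on the mock no longer break. A quick usage sketch (assuming Zend_Mail&#39;s fluent setters):</p>
<pre><code>$mail = $mock-&gt;setSubject(&#39;Hello&#39;)
             -&gt;setBodyText(&#39;Some body text&#39;)
             -&gt;addTo(&#39;someone@example.com&#39;);

// every call in the chain returned the mock itself
$this-&gt;assertSame($mock, $mail);
</code></pre>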
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>, presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
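<p>Broken down, the oneliner works like this:</p>
<pre><code># list every known package with its dpkg state
dpkg --get-selections

# keep only packages that were removed but still have config files
... | grep deinstall

# mark those packages for purging and feed the list back to dpkg
... | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections

# act on the pending selections, purging the leftover configs
dpkg -Pa
</code></pre>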
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Modifyvm must only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
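<p>With the guestssh forwarding rule above in place, you can then reach the guest from the host (assuming an SSH user of &#39;vagrant&#39; on the VM):</p>
<pre><code>ssh -p 2222 vagrant@127.0.0.1
</code></pre>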
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
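<p>A handy trick is to filter the ldd output down to just the unresolved libraries:</p>
<pre><code>$ ldd /usr/bin/skype | grep &#39;not found&#39;
&gt; libXss.so.1 =&gt; not found
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
</code></pre>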
<p>So looking at this, I am missing both a compatible libxss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
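<p>For example, to produce and compare the two assembly listings side by side:</p>
<pre><code>$ gcc -m32 -S -masm=intel -o test32.s test.c
$ gcc -m64 -S -masm=intel -o test64.s test.c
$ diff test32.s test64.s
</code></pre>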
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into, and then ripped out of, the core quite frequently in the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
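<p>Once DbUnit is installed, a database test case extends the class from the new package as before. A minimal sketch (the connection details and fixture file are placeholders):</p>
<pre><code>require_once &#39;PHPUnit/Extensions/Database/TestCase.php&#39;;

class MyDatabaseTest extends PHPUnit_Extensions_Database_TestCase
{
    // returns the connection DbUnit uses to seed and inspect the database
    protected function getConnection()
    {
        $pdo = new PDO(&#39;sqlite::memory:&#39;);
        return $this-&gt;createDefaultDBConnection($pdo, &#39;:memory:&#39;);
    }

    // returns the dataset loaded before each test
    protected function getDataSet()
    {
        return $this-&gt;createFlatXMLDataSet(dirname(__FILE__) . &#39;/fixture.xml&#39;);
    }
}
</code></pre>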
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.php.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entityId, then returns whichever value matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice by restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible, mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric, I think, is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
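As an aside, the echo-through-sed subshell in a loop like the one above can be replaced with bash parameter expansion; this is just an alternative sketch (the filename is invented for the demo), not something the original post uses:

```shell
# ${IMAGE%.jpg} strips a trailing ".jpg", so no echo/sed subshell is needed.
IMAGE=holiday.jpg
echo "${IMAGE%.jpg}-resized.jpg"    # holiday-resized.jpg
# the loop body would then read:
#   convert -resize '1280x720' "$IMAGE" "${IMAGE%.jpg}-resized.jpg"
```

This avoids spawning two extra processes per image, which adds up over large batches.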
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch into a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? A refspec with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
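If you want to convince yourself without touching a real remote, the whole round trip can be replayed against a throwaway local bare repository. Every path and branch name below is invented for the demo:

```shell
# Push a branch to a scratch "remote", then delete it with the
# empty-source refspec. All names here are made up for the demo.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"          # a stand-in for origin
git init -q "$tmp/work" && cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git push -q "$tmp/remote.git" HEAD:develop    # create the remote branch
git ls-remote --heads "$tmp/remote.git"       # shows refs/heads/develop
git push -q "$tmp/remote.git" :develop        # push "nothing" into develop
git ls-remote --heads "$tmp/remote.git"       # develop is gone
```

The second ls-remote prints nothing, confirming the branch was deleted on the "remote".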
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how sponge waits until end-of-file (EOF) before opening and writing to the output file, i.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
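You can see the difference for yourself with a single byte, assuming a glibc iconv that knows both charsets: byte 0x93 is the curly left quote in cp1252, but an unassigned C1 control code in iso-8859-1.

```shell
# Byte 0x93 (octal 223) read as cp1252 becomes U+201C, three UTF-8 bytes:
printf '\223' | iconv -f cp1252 -t utf-8 | od -An -tx1    # e2 80 9c
# The same byte read as iso-8859-1 maps to a bare C1 control, not a quote:
printf '\223' | iconv -f iso-8859-1 -t utf-8 | od -An -tx1
```

This is why iso-8859-1 text survives being converted as cp1252, but cp1252 punctuation is mangled when converted as iso-8859-1.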
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can use bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
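To see why the soaking matters: plain shell redirection truncates the output file before the pipeline has read it. Here is a toy stand-in for sponge; sponge_like is a function invented for this sketch, not part of moreutils:

```shell
# sponge_like is a toy substitute for sponge: soak up all of stdin first,
# only then open and write the target file.
sponge_like() { buf=$(cat); printf '%s\n' "$buf" > "$1"; }

printf 'hello\nworld\n' > demo.txt
# A plain "tr ... < demo.txt > demo.txt" would truncate demo.txt to nothing,
# because the shell opens (and empties) demo.txt before tr ever runs.
tr 'a-z' 'A-Z' < demo.txt | sponge_like demo.txt
cat demo.txt    # prints HELLO then WORLD
```

The real sponge is more careful (it spills large inputs to a temp file), but the buffering idea is the same.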
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and then we direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
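The mechanism is easy to poke at in isolation (bash required; the commands below are invented for the demo). Note too that for this particular job an ordinary pipe, wget -q -O - URL | tar zxv, would have worked just as well:

```shell
#!/bin/bash
# Each <( ) expands to a /dev/fd path; the outer command opens it like a file.
echo <(echo one) <(echo two)    # prints something like: /dev/fd/63 /dev/fd/62
# diff normally wants two file arguments, but happily reads the named pipes:
diff <(printf 'a\nb\n') <(printf 'a\nc\n') || true
```

Process substitution really earns its keep with commands like diff that insist on file arguments and cannot read two streams from a single stdin.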
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (the ones that still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can take the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G are <em>appended</em> to the user&#39;s existing list of groups. Without it, the existing supplementary groups are replaced by the ones supplied.</p>
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
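As a shorthand, git checkout --track with only the remote branch given infers the matching local name. A throwaway-repository sketch, with all paths invented for the demo:

```shell
# --track infers the local branch name from the remote one.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q "$tmp/seed" && cd "$tmp/seed"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m seed
git push -q "$tmp/remote.git" HEAD:master HEAD:develop
git clone -q "$tmp/remote.git" "$tmp/work" && cd "$tmp/work"
git checkout --track origin/develop     # local name inferred: develop
git rev-parse --abbrev-ref HEAD         # develop
```

The end state is the same as the explicit -b develop origin/develop form: a local develop branch tracking origin/develop.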
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
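The same identity can also be set from the shell with git config --global, run as the jenkins user. The sketch below uses a scratch HOME so it does not touch a real ~/.gitconfig:

```shell
# Demonstrated in a scratch HOME so a real ~/.gitconfig is left untouched;
# on a real box, run the two config commands as the jenkins user instead.
export HOME=$(mktemp -d)
git config --global user.name  "Jenkins"
git config --global user.email "jenkins@localhost"
git config --global user.name     # prints: Jenkins
cat "$HOME/.gitconfig"            # shows the [user] section written above
```

Either route works; the file edit is handy when the jenkins user has no login shell.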
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment setup correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
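<p>Before moving on, a quick shell loop can confirm that the command line tools from the checklist are actually on your PATH. This is just a convenience sketch; the command names below are the usual defaults and may differ slightly on your installation:</p>

```shell
# Report which of the expected tools are resolvable on the PATH.
# Command names are typical defaults; adjust to match your installation.
for tool in java ant git phpunit phpcs phpmd pdepend phpcpd phploc ppw; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```

Anything reported MISSING should be installed before continuing.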
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
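<p>If you prefer to script that amendment, a sed one-liner will splice the attribute into the opening element. The sketch below runs against a stand-in file so the effect is visible; point the sed line at your real phpunit.xml.dist:</p>

```shell
# Stand-in for the generated phpunit.xml.dist (yours will have more attributes)
cat > /tmp/phpunit.xml.dist <<'EOF'
<phpunit backupGlobals="false"
         colors="true">
EOF

# Splice the bootstrap attribute into the opening phpunit element in place
sed -i 's|<phpunit |<phpunit bootstrap="tests/bootstrap.php" |' /tmp/phpunit.xml.dist
grep -c 'bootstrap="tests/bootstrap.php"' /tmp/phpunit.xml.dist   # prints 1
```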
<p>Our build environment is now configured to run Zend Framework test cases, we have our Ant build file set up, and some defaults are configured for our code coverage and analysis tools. Let&#39;s add the new build and PHPUnit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new project, we copy this template and give our project a name. Choose whatever you like; for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository; for example, mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
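<p>It&#39;s worth sanity-checking that cloning over a file:// URL works before pointing Jenkins at it. Here is a self-contained rehearsal using throwaway temp paths; substitute your real repository path in practice:</p>

```shell
# Create a throwaway repository, then clone it over file:// the way Jenkins will.
src=$(mktemp -d)
git init -q "$src"
git -C "$src" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m 'initial commit'
git clone -q "file://$src" "$src-clone"
git -C "$src-clone" log --oneline | wc -l   # one commit cloned
```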
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining settings can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the Ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
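<p>One common remedy for the missing-identity symptom (this is my own suggestion, not a summary of that post) is to give the OS user Jenkins runs as a git identity. On the default Ubuntu package that user is called jenkins, so you would prefix these commands with sudo -u jenkins; the name and address here are placeholders:</p>

```shell
# Give git a user identity (on a real server, run as the jenkins OS user,
# e.g. via: sudo -u jenkins git config --global ...)
git config --global user.name "Jenkins CI"
git config --global user.email "jenkins@example.com"
git config --global user.name   # prints: Jenkins CI
```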
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t just about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help with this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately when something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: reload the server configuration</li>
<li>restart: restart the server</li>
<li>exit: shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
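<p>For convenience, these three commands can be wrapped in a small shell function. The function name and the JENKINS_URL variable below are my own conventions, not anything Jenkins defines:</p>

```shell
# Thin wrapper around the three HTTP admin commands.
JENKINS_URL=${JENKINS_URL:-http://localhost:8080}

jenkins_cmd() {
  case "$1" in
    reload|restart|exit) curl -s "$JENKINS_URL/$1" ;;
    *) echo "usage: jenkins_cmd reload|restart|exit" >&2; return 1 ;;
  esac
}

# Unknown commands are rejected locally rather than sent to the server:
jenkins_cmd bogus 2>/dev/null || echo "rejected: bogus"
```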
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
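<p>The port edit itself is easily scripted with sed. Demonstrated here on a stand-in copy rather than the real /etc/default/jenkins (you would need sudo for the real file):</p>

```shell
# Rewrite the HTTP_PORT line; the /tmp copy stands in for /etc/default/jenkins.
f=$(mktemp)
printf 'HTTP_PORT=8080\n' > "$f"
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8081/' "$f"
cat "$f"   # prints: HTTP_PORT=8081
```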
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as, of all the caching quirks in Magento, this is the one I&#39;ve burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, application deployments, tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp session.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
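<p>Since I reach for this often, I keep a shorthand for it in my shell startup files. The alias name below is my own invention:</p>

```shell
# Alias for resuming partial transfers over ssh; the name is arbitrary.
alias resume_transfer='rsync --partial --progress --rsh=ssh'

# Show the definition; usage: resume_transfer localfile user@host:/remote/path
alias resume_transfer
```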
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace, though: an hs_* dump file in your home directory. In that file, among a lot of other verbiage, is a message like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig</p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote</p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
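<p>The whole flow can be rehearsed locally, with a bare repository standing in for github. A self-contained sketch with throwaway paths; note that current git spells the last step --set-upstream-to rather than --set-upstream:</p>

```shell
# A local bare repo stands in for github; everything here is throwaway.
remote=$(mktemp -d)
work=$(mktemp -d)
git init -q --bare "$remote"
git init -q "$work"
cd "$work"
git -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m 'initial commit'
branch=$(git symbolic-ref --short HEAD)   # 'master' on git of this era

git remote add origin "$remote"
git push -q origin "$branch"
git branch --set-upstream-to="origin/$branch" "$branch"
git pull -q && echo "pull now works without arguments"
```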
<p>Personally I find option 4 the best, as it involves the least work.</p>
<p>To avoid having to do any of this in future, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has in the past been notoriously difficult to apply TDD practices to. Luckily, in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a> testing frameworks released to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables to export. Another reply had a better way: use a call to mysql to get the list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
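<p>For reference, the approach boils down to a function shaped roughly like this; the name and option handling are illustrative, and the embedded gist is the canonical version. Note the SQL LIKE wildcard is % rather than the shell&#39;s *:</p>

```shell
# Dump only the tables whose names match a SQL LIKE pattern.
# -N strips the column header from mysql's output; -p prompts for a password.
mysqldump_bypattern() {
  local user=$1 db=$2 pattern=$3
  local tables
  tables=$(mysql -N -u "$user" -p "$db" -e "SHOW TABLES LIKE '$pattern'")
  # word-splitting of $tables is intentional: one argument per table
  mysqldump -u "$user" -p "$db" $tables
}

# usage: mysqldump_bypattern root mydb 'mytables_%' > mytables.sql
```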
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest egit/jgit packages not yet being in the Indigo release p2 update repository. Until they are, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using the Strings from the keys array instead, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select PDT Development Tools All in One SDK (leave the others unselected) and click Next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to ignore them for now or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>Given the numerous ways of defining variables in Ruby, this looks like yet another variable-like construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. The two best resources I&#39;ve found explaining them are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging or simply for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases it’s probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character in a single-byte encoding or up to four bytes per character in UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: compression slows the dump down, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but it should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump, meaning other clients will not be permitted to write to a table while the dump is in progress. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables mysql -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
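<p>Putting the options above together: a sketch of a low-impact backup invocation, wrapped in a helper function. The function name is my own, and credentials are assumed to come from a ~/.my.cnf rather than the command line.</p>

```shell
# Hypothetical helper combining the mysqldump options discussed above.
# --single-transaction : consistent InnoDB snapshot without blocking writers
# --skip-lock-tables   : don't take MyISAM table locks for the whole dump
# --disable-keys       : rebuild indexes once at the end of the import
# --no-autocommit      : commit each table's inserts in one go
backup_db() {
    local db="$1" outfile="$2"
    mysqldump --single-transaction --skip-lock-tables \
              --disable-keys --no-autocommit \
              "$db" > "$outfile"
}

# usage: backup_db mydatabase mydump.sql
```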
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or in git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to $ svn del.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. It conventionally denotes a bare git repository (i.e. one that has only the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of Subversion and svn commit.</p>&#13;
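<p>The whole dance can be exercised end-to-end with local paths standing in for the ssh URL. The temporary-directory paths here are purely illustrative.</p>

```shell
# Local stand-ins for the remote URL; mktemp keeps this self-contained.
central=$(mktemp -d)/myrepo.git
git init --bare "$central"

work=$(mktemp -d)
cd "$work"
git init
echo 'hello' > README
git add README
git -c user.name=Example -c user.email=example@example.com \
    commit -m 'Initial commit'
git remote add origin "$central"
# HEAD:master pushes the current branch whatever it happens to be called
git push origin HEAD:master
```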
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it's hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;inner pig&#39; is too great: that is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language you will see a word that has some meaning to you in your native language. Often these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes a word can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0]; ?&gt;&#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
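<p>A quick shell check of that convention (the temporary script path is arbitrary):</p>

```shell
# In sh/bash, $0 is the program name and $1 the first user-supplied
# argument, mirroring argv[0] and argv[1] in C.
demo=$(mktemp)
cat > "$demo" <<'EOF'
#!/bin/sh
echo "name:  $0"
echo "first: $1"
EOF
sh "$demo" 1234   # prints the script's own path, then "first: 1234"
```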
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C, and Bash the first element of ARGV is the program's name; in Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
<p>I'm trying to think which makes more sense; probably the Ruby/Perl implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the MySQL driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OS X MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a>, you'll often want to update documents over the course of the index's lifecycle. This can prove tricky with the current implementation, as there is no in-place update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're searching the full index in an attempt to find a unique document with an ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction for this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
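<p>Putting it all together, a sketch of the full delete-then-re-add update cycle might look like this (the field names and values are illustrative only):</p>&#13;
<pre>// locate the existing document(s) by their keyword term&#13;
$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
foreach ($index-&gt;termDocs($term) as $id) {&#13;
    $index-&gt;delete($id);&#13;
}&#13;
// add the replacement document&#13;
$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('path', '/somepath/somewhere'));&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $newTitle));&#13;
$index-&gt;addDocument($doc);&#13;
$index-&gt;commit();&#13;
</pre>&#13;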
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily, in the comments, Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
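<p>As far as I can tell, the constants are bitmasks, so they can be OR'd together to match methods with any of the given modifiers:</p>&#13;
<pre>// print the names of all static or final methods&#13;
$r = new ReflectionClass("MyClass");&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_STATIC | ReflectionMethod::IS_FINAL) as $m) {&#13;
    echo $m-&gt;getName(), "\n";&#13;
}&#13;
</pre>&#13;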
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table format, suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a translated language pack to install, convert it to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year, and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it could. As a developer, or even a recent graduate, you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative and well spaced, with a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts: it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
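<p>A quick sketch of just how un-constant a Ruby constant is:</p>&#13;
<pre># constant.rb&#13;
PI = 3.14159&#13;
PI = 3        # only elicits: warning: already initialized constant PI&#13;
puts PI       # prints 3 - the 'constant' has quietly changed&#13;
</pre>&#13;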
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sysadmin monkeys that haven’t been seduced by Python. And of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience of and understanding of how these low level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple tcpclient in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
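<p>For example, a minimal TCP client sketch using PHP's stream functions (the host and request are illustrative only) would not look out of place in a C program:</p>&#13;
<pre>&lt;?php&#13;
// connect, send a request, read the response line by line&#13;
$fp = fsockopen('example.com', 80, $errno, $errstr, 30);&#13;
if (!$fp) {&#13;
    die("$errstr ($errno)\n");&#13;
}&#13;
fwrite($fp, "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n");&#13;
while (!feof($fp)) {&#13;
    echo fgets($fp, 128);&#13;
}&#13;
fclose($fp);&#13;
</pre>&#13;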
<p>The biggest benefit though of learning a bit of C or C++, is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in. But similarly, a sometimes useful characteristic that makes the environment still relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str itself points to an address in memory where 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
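<p>You can see the same trade-off from PHP itself. A loop of '.' appends may, like growing a C buffer by hand, reallocate and copy as the string outgrows its storage, whereas collecting the pieces and joining once sizes the result up front. (A sketch with hypothetical helper names; modern PHP over-allocates internally, so the practical gap is smaller than in hand-written C.)</p>

```php
<?php
// Growing a string one append at a time: each '.' may force a
// reallocate-and-copy under the hood -- the work C makes you do explicitly.
function build_by_concat(array $parts): string
{
    $out = '';
    foreach ($parts as $p) {
        $out .= $p;
    }
    return $out;
}

// Collecting the pieces and joining once lets PHP size the result in a
// single pass, like a strlen() loop followed by one malloc() in C.
function build_by_implode(array $parts): string
{
    return implode('', $parts);
}
```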
<p>As mentioned above, PHP was originally intended to be a pure templating language for C web applications, which is where PHP modules / extensions come in. Originally, these were where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly, methods within Zend_Pdf expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
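<p>For instance, fputcsv() insists on a stream, but with php://memory the "file" never touches disk. A small sketch with made-up sample rows:</p>

```php
<?php
// Write CSV rows into a memory-backed stream, then read the whole
// "file" back as a string, without ever creating a file on disk.
$rows = [['sku', 'qty'], ['ABC-1', '3']];

$fh = fopen('php://memory', 'wb+');
foreach ($rows as $row) {
    fputcsv($fh, $row);
}
rewind($fh);                      // seek back to the start before reading
$csv = stream_get_contents($fh);  // the entire in-memory "file"
fclose($fh);
```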
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL (including the encoded post data), the other your server's response.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application, without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on case-sensitive environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
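<p>The translation is easy to sketch. The helper below is illustrative only (a hypothetical function, not Magento's actual Varien autoloader code), but it reproduces the mapping described above: each underscore-separated segment gets its first letter uppercased, then the underscores become directory separators, with no case folding anywhere:</p>

```php
<?php
// Hypothetical sketch of a Magento-style model alias -> file path mapping.
// 'a_long_name' becomes A/Long/Name.php; nothing is case-folded, so the
// file on disk must match this casing exactly on a case-sensitive system.
function model_alias_to_path(string $module, string $model): string
{
    $class = ucwords($module . '_model_' . $model, '_');
    return str_replace('_', '/', $class) . '.php';
}
```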
<p>On Windows this is fine; on a case-sensitive file system (e.g. most Unix/Linux systems, or a case-sensitive HFS+ volume on a Mac) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (relative to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type DATE or DATETIME, although FROM_UNIXTIME will let you work with Unix timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, returns 2010-04-20. So if your date_column holds a date earlier than this value, the record is more than 30 days old and is selected.</p>&#13;
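<p>If you want to sanity-check the same cutoff from PHP code, say before generating a report, the arithmetic is easy to reproduce with DateTime. A minimal sketch, using the dates from the example above:</p>

```php
<?php
// Reproduce DATE_SUB(CURDATE(), INTERVAL 30 DAY) for a fixed "today"
$cutoff = new DateTime('2010-05-20'); // stand-in for CURDATE()
$cutoff->modify('-30 days');
echo $cutoff->format('Y-m-d'), "\n";  // 2010-04-20

// A record dated earlier than the cutoff is more than 30 days old
$recordDate = new DateTime('2010-03-01');
var_dump($recordDate < $cutoff);      // bool(true)
```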
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by editing app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block, add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where the second parameter to the delete command was removed. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However, by default, your shell environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as those bytes do not form a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
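<p>Putting the two steps together, normalisation on input and encoding-aware escaping on output look like this. A minimal sketch; the sample string is illustrative:</p>

```php
<?php
// "Müller" as ISO-8859-1 bytes: the u-umlaut is the single byte 0xFC
$isoText = "M\xFCller";

// Normalise the input to utf-8
$utf8Text = iconv('ISO-8859-1', 'UTF-8', $isoText);

// Escape for HTML output, telling PHP the input is utf-8
echo htmlentities($utf8Text, ENT_QUOTES, 'UTF-8'); // M&uuml;ller
```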
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt. The '' after -i is an empty suffix for the backup file; this applies to the BSD sed shipped with Mac OSX, while GNU sed accepts -i with no argument. </p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expansion produces one index per element: ${#FILES[@]} is the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3 and 4, one per line.</p>&#13;
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento so the core models and helpers are available&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
&#13;
// Plaintext value to encrypt, passed as the first command line argument&#13;
$data = $_SERVER['argv'][1];&#13;
&#13;
// Encrypt it exactly as Magento's admin panel would&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux; you just go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase                                                          [48/53]There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
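<p>To see what the alias actually does end to end, here is a throwaway-repository sketch (the paths and the zendesk branch name are made up for illustration):</p>
<pre><code>tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"   # stand-in for a real remote
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email you@example.com
git config user.name you
git remote add origin "$tmp/origin.git"
echo hello > file
git add file
git commit -qm initial
git checkout -qb zendesk
git push -q origin zendesk             # forgot -u, so no tracking yet

# this is the command the sup alias expands to:
git branch --set-upstream-to=origin/"$(git symbolic-ref --short HEAD)"
git rev-parse --abbrev-ref @{u}        # prints origin/zendesk
</code></pre>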
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variables set in one stage cannot be passed along the pipeline, as each subprocess gets a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
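<p>A minimal demonstration of the problem, along with one workaround from that page: bash&#39;s lastpipe option (bash 4.2+), which runs the final stage of a pipeline in the current shell:</p>
<pre><code>plain_count=0
printf 'a\nb\nc\n' | while read -r line; do
  plain_count=$((plain_count + 1))   # increments a copy inside a subshell
done
echo "plain pipeline: $plain_count"  # still 0

shopt -s lastpipe                    # only effective with job control off, i.e. in scripts
lastpipe_count=0
printf 'a\nb\nc\n' | while read -r line; do
  lastpipe_count=$((lastpipe_count + 1))   # final stage now runs in the current shell
done
echo "with lastpipe: $lastpipe_count"      # 3
</code></pre>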
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
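<p>Here is a throwaway-repository session showing both behaviours; with merge.ff set to only, the fast-forward merge goes through and the diverged one is refused (the file names are invented):</p>
<pre><code>set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you
git config merge.ff only
base=$(git symbolic-ref --short HEAD)   # master or main, depending on your git
echo one > f
git add f
git commit -qm base

git checkout -qb feature
echo two >> f
git commit -qam feature
git checkout -q "$base"
git merge -q feature                    # succeeds: feature is directly ahead
echo "fast-forwarded"

git checkout -qb feature2
echo three > g
git add g
git commit -qm feature2
git checkout -q "$base"
echo four > h
git add h
git commit -qm diverge                  # histories have now diverged
if git merge -q feature2 2>/dev/null; then
  echo "merged"
else
  echo "refused: not a fast-forward"
fi
</code></pre>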
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit are subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing a description message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null when tableref.column equals 0. Aggregate functions such as MIN ignore nulls, so the zero values are simply excluded, leaving the smallest value greater than zero.</p>
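<p>You can watch it work on a few rows of throwaway data; the sketch below uses the sqlite3 command-line shell, which handles NULLIF and MIN the same way (the table and column names are invented, and assume sqlite3 is installed):</p>
<pre><code>sqlite3 :memory: "
  CREATE TABLE products (group_id INT, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 4.5), (1, 2.5), (2, 0.0);
  SELECT group_id, MIN(NULLIF(price, 0)) FROM products GROUP BY group_id;
"
</code></pre>
<p>The first group comes back as 2.5, not 0.00. Note the second group, where every price is zero, comes back as null rather than 0.00; you may want to handle that case in application code.</p>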
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-gd php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
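<p>The reason this works is the shell&#39;s one-shot environment assignment: putting VAR=value in front of a command exports the variable to that single invocation only. A generic illustration:</p>
<pre><code>sh -c 'echo "bin=${PHP_PEAR_PHP_BIN:-unset}"'
# bin=unset
PHP_PEAR_PHP_BIN=php sh -c 'echo "bin=${PHP_PEAR_PHP_BIN:-unset}"'
# bin=php
sh -c 'echo "bin=${PHP_PEAR_PHP_BIN:-unset}"'
# bin=unset again; the assignment does not leak into the session
</code></pre>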
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now to make installing PEAR packages easier I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…).</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>Last thing that needs to be done is setting up a selfsigned ssl certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
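<p>If you would rather skip the interactive questions and double-check what you generated, something like this works (the -subj value is just an example; adjust the CN to your host name):</p>
<pre><code>openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=magento.dev" \
  -keyout myserver.key -out myserver.crt 2>/dev/null
openssl x509 -in myserver.crt -noout -subject   # confirms the CN
openssl x509 -in myserver.crt -noout -enddate   # confirms the expiry date
</code></pre>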
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
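<p>If you want to convince yourself the traverse bit matters, try this with throwaway paths as a regular user; root bypasses permission checks, so the denial only shows up for normal accounts:</p>
<pre><code>tmp=$(mktemp -d)
mkdir -p "$tmp/outer/inner"
echo hi > "$tmp/outer/inner/file"
chmod a-x "$tmp/outer"                 # drop the traverse bit on the parent
cat "$tmp/outer/inner/file" 2>/dev/null || echo "denied"
chmod a+x "$tmp/outer"                 # restore it
cat "$tmp/outer/inner/file"            # hi
</code></pre>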
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file path for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
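<p>Note that the mysqli and legacy mysql extensions read their own socket directives, so if you use those too it may be worth setting all three. A sketch of the equivalent ini lines (the path assumes the mysql55 port&#39;s default socket location):</p>
<pre><code>; socket path assumes the mysql55 port&#39;s default location
pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock
mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock
mysqli.default_socket=/opt/local/var/run/mysql55/mysqld.sock
</code></pre>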
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track them down.</p>
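<p>If you mount the share often, you can avoid the option entirely by declaring the key in ~/.ssh/config (a sketch; the host name and key path are the examples from above):</p>
<pre><code># ~/.ssh/config
Host aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
</code></pre>
<p>With that in place, a plain sshfs aaron@aws.instance.com:/var/www/ ~/Sites/awshost picks up the key automatically.</p>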
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example:</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
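<p>As a hypothetical sketch (the class and method names are invented for illustration, not an existing library), such a wrapper could trade the cryptic C-style names for descriptive ones:</p>
<pre><code>&lt;?php
// Illustrative only - not an existing library.
final class Str
{
    private $value;

    public function __construct($value)
    {
        $this-&gt;value = (string) $value;
    }

    // Portion of the string after the last occurrence of $needle,
    // i.e. strrchr() without the needle itself.
    public function afterLast($needle)
    {
        $pos = strrpos($this-&gt;value, $needle);
        if ($pos === false) {
            return null;
        }
        return new Str(substr($this-&gt;value, $pos + strlen($needle)));
    }

    public function __toString()
    {
        return $this-&gt;value;
    }
}

$url = new Str(&#39;http://www.google.com/a/b/c/d.img&#39;);
echo $url-&gt;afterLast(&#39;/&#39;); // prints d.img
</code></pre>
<p>A name like afterLast at least says what it does; strrchr does not.</p>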
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing and those, whether in his or the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang.</p>
<p>The book contains a number of particularly powerful scenes. Two in particular stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drinks; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and considers why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite formalising most of the vocabulary for OO software development during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer, and after a few 30 minute sprints I pick up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and then entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online on GitHub, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have MacPorts in /opt/local, the default, and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
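<p>If the gem is managed by Bundler rather than installed with gem install, the same flags can be recorded once with Bundler&#39;s per-gem build config, which passes them to the gem&#39;s extconf on every bundle install (a sketch of the same options as above):</p>
<pre><code>$ bundle config build.mysql2 --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql
</code></pre>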
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine; however, if we want to debug during a PHPUnit test we would normally do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it does for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections made to its own port 9000 back to the ssh client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
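<p>If you find yourself setting up this tunnel regularly, the same forward can live in your ~/.ssh/config, so a plain ssh to the box establishes it automatically. A sketch (the host name is illustrative):</p>

```
Host myvm.local
    # Forward the VM's port 9000 back to this machine's port 9000,
    # where the IDE's xdebug listener is waiting.
    # Equivalent to: ssh -R 9000:localhost:9000 myvm.local
    RemoteForward 9000 localhost:9000
```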
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin, which isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
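<p>For a taste of what that looks like in practice, declaring a dependency is a single JSON stanza. A minimal composer.json sketch (the package and version constraint here are just an example):</p>

```json
{
    "require": {
        "monolog/monolog": "~1.0"
    }
}
```

<p>Running composer install then resolves and downloads the library (and its own dependencies) into vendor/, along with an autoloader.</p>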
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable release of package y, while package z requires the beta release of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) and dubious (at worst) quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves. </p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
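<p>The same trick works with any tool that insists on a file argument. A quick generic example you can try anywhere, using diff rather than xmllint so nothing extra needs to be installed:</p>

```shell
# Process substitution (<(...)) presents each command's output as a
# readable path, so file-only tools can consume it; here diff compares
# two generated "files" without anything being written to disk.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo identical
```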
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
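<p>If you want to see the output shape without touching a real project, here is a disposable sketch (assumes git is on your PATH; the repository and file names are invented for the example):</p>

```shell
# Disposable end-to-end demo of --name-only: build a throwaway repo,
# commit one file, then list just the files touched by the commit.
repo=$(mktemp -d)
git -C "$repo" init -q
echo hello > "$repo/readme.txt"
git -C "$repo" add readme.txt
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
    commit -qm 'add readme'
# Print the commit header plus the affected file names
files=$(git -C "$repo" show --name-only --pretty=format:'%h %s' HEAD)
echo "$files"
rm -rf "$repo"
```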
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service, with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically there are more examples of bad code out there than for pretty much any other platform, because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still broadly respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven because, owing to the age of the language, there&#39;s plenty of out-of-date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> ranking well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, so I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh this list, I need to prune my branches. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
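<p>The whole add/delete/prune cycle can be reproduced in a scratch directory if you want to watch pruning fire safely (assumes git is installed; all names are invented):</p>

```shell
# Disposable sketch of the prune cycle: an upstream repo gains and
# loses a branch, and the clone prunes its stale remote-tracking ref.
up=$(mktemp -d)
work=$(mktemp -d)
git -C "$up" init -q
git -C "$up" -c user.email=demo@example.com -c user.name=demo \
    commit -qm init --allow-empty
git -C "$up" branch old-feature
git clone -q "$up" "$work/clone"
git -C "$up" branch -D old-feature        # branch deleted upstream
git -C "$work/clone" fetch -q             # fetch alone does not prune
git -C "$work/clone" remote prune origin  # drops origin/old-feature
remaining=$(git -C "$work/clone" branch -r)
echo "$remaining"
rm -rf "$up" "$work"
```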
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer type to the string &#39;disabled&#39;, the existing observer is removed and replaced with something that will never be fired.</p>
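<p>As a cut-down illustration of the pattern, one such override looks roughly like this. The event and observer node names below are from memory, so verify them against app/code/core/Mage/Log/etc/config.xml before relying on them:</p>

```xml
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <!-- replace the core log observer's type with the
                         string 'disabled' so it is never dispatched -->
                    <log><type>disabled</type></log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```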
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think: I&#39;ve audited these pages, and they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe who fixed one instance but (and programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bag support, which lets you provide environment-specific configuration for your provisioning, is not yet in the stock Chef gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>
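<p>For reference, the override can be sketched roughly like this. This is an illustrative adminhtml.xml for a custom module; the promo node (the Promotions top-level menu) and the module name are examples of the technique, not the contents of the gist above:</p>

```xml
<?xml version="1.0"?>
<config>
    <menu>
        <!-- use the code of the menu item you want to hide; promo is
             the Promotions top-level menu as an example -->
        <promo>
            <depends>
                <!-- this module does not exist, so the dependency check
                     fails and the menu item is hidden -->
                <module>Mage_ThisModuleDoesNotExist</module>
            </depends>
        </promo>
    </menu>
</config>
```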

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all regular files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped ; terminates the command sequence (much like it does in regular bash).</p>
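<p>The whole trick can be tried safely in a scratch directory. Here is a self-contained sketch (file names and dates are invented for the demo); note the extra -name filter, because a boundary file is not strictly newer than itself and the end boundary file would otherwise match:</p>

```shell
# Demo in a throwaway directory: three files with different mtimes,
# plus two boundary files created with touch -t.
dir=$(mktemp -d)
cd "$dir"
touch -t 202001010000 old.log      # before the range
touch -t 202006150000 middle.log   # inside the range
touch -t 202012310000 new.log      # after the range
touch -t 202003010000 start_date_file
touch -t 202009010000 end_date_file

# Only middle.log is newer than the start boundary and not newer than
# the end boundary (the boundary files themselves are filtered by name).
find . -type f -newer start_date_file ! -newer end_date_file ! -name '*_date_file'
```

<p>Swap the bare listing for -ls, or -exec rm -rf {} \;, once you are happy with the selection.</p>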
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to setup a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your macbook air.</p>
<p>It makes sense: a higher rated powersupply can support a lower power rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt Macbook Pro.</p>
<p><em>The other thing</em> which I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2 whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
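<p>A quick way to check whether your new shell is already registered before running chsh (a small sketch; the Macports path is the one from above):</p>

```shell
# Verify the shell is listed in /etc/shells; chsh will refuse paths
# that are not registered there.
shell_path="/opt/local/bin/bash"
if grep -qx "$shell_path" /etc/shells 2>/dev/null; then
  status="registered"
else
  status="missing"
  echo "Register it with: echo $shell_path | sudo tee -a /etc/shells"
fi
echo "$shell_path is $status"
```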
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; \
    | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory: you pipe a query in to mysql and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Because you can&#39;t embed a single quote inside the single-quoted awk program, you pass one in via the q variable. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in Javascript array literal &#39;[&#39; and &#39;]&#39; symbols (note the parentheses must be escaped for sed&#39;s back-reference to work). Awk or any other concatenation approach would do just fine here too.</p>
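<p>You can test-drive the formatting part of the pipeline without a database by substituting printf for the mysql command (the colour values are just sample data):</p>

```shell
# Three fake rows standing in for the mysql --silent output.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# prints: ['red','green','blue'];
```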
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not support a purely negative clause like this as one side of an OR expression.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land a more implementation specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products, and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It means the validator also has to be constructed on each iteration, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a c compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. By break, in the absolute best case, I mean the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. First step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can push &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
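<p>All the steps above can be gathered into one sketch. The run wrapper only echoes each command so the sequence can be reviewed safely; the partitions and mountpoint are the examples from this post, so adjust them before swapping the echo for real execution:</p>

```shell
# Dry-run sketch of preparing and entering the chroot.
TARGET=/mnt/ubuntu
run() { echo "+ $*"; }  # replace the echo with "$@" to actually execute

run mount -t ext4 /dev/sda5 "$TARGET"          # root partition
run mount -t ext2 /dev/sda1 "$TARGET/boot"     # separate /boot, if any
run mount -t proc none "$TARGET/proc"
run mount -o bind /dev "$TARGET/dev"
run mount -o bind /sys "$TARGET/sys"
run cp /etc/resolv.conf "$TARGET/etc/resolv.conf"  # working DNS inside
run chroot "$TARGET" /bin/bash
```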
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically the getProductStatus() method</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. This is typically used on grouped products when determining if all their children stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so therefore the group was out of stock.</p>
<p>In my code the store id was neither null, nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column was first, and the product id column was second (unlike the if branch, where they were in reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key) the result set consisted of just 2 rows (one for each unique status code). In order for this code to work as you would expect, the entity id (product id) needs to be first in the result set, so that it gets used as the key.</p>
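<p>The collapse is easy to demonstrate with any key/value map. Here is a small awk sketch (the numbers are made up) standing in for fetchPairs: five rows whose first column is a status of 1 or 2 shrink to just two entries, which is exactly the failure described above:</p>

```shell
# The first column plays the fetchPairs key; later rows with the same
# key overwrite earlier ones, so only the distinct keys survive.
printf '1 101\n1 102\n2 103\n1 104\n2 105\n' \
  | awk '{ if (!($1 in pairs)) n++; pairs[$1] = $2 } END { print n " entries" }'
# prints: 2 entries
```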
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double. A placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into the common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, tying them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised throughout the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display in the depth and quality of their references. While GOOS is a fairly domain-specific (Mock Object) text, it serves as a wonderful launching pad for further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and more generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy translate easily.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit your ~/.vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app, which is an exception) .bash_profile gets sourced only on login - specifically, only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell, such as if you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases the login rule applies: bash reads .bash_profile, not .bashrc, unless .bash_profile sources .bashrc itself.</p>
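<p>If you&#39;re ever unsure which kind of shell you&#39;re currently in, bash records it in the login_shell shell option (shopt is a bash builtin, so this won&#39;t work in plain sh):</p>
<pre><code># shopt -q queries the option silently and reports via its exit status
shopt -q login_shell &amp;&amp; echo &quot;login shell&quot; || echo &quot;non-login shell&quot;
</code></pre>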
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to simply source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
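<p>That delegation is tiny. A minimal .bash_profile along these lines keeps everything in one place (a sketch, assuming your per-shell setup lives in ~/.bashrc):</p>
<pre><code># ~/.bash_profile: delegate to ~/.bashrc so login shells and
# interactive non-login shells share the same configuration
if [ -f &quot;$HOME/.bashrc&quot; ]; then
    . &quot;$HOME/.bashrc&quot;
fi
</code></pre>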
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>, presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em>, the configuration files of the uninstalled packages are not deleted.</p>
<p>You can do this manually with <em>apt-get purge <package></em>, or you can use the <em>dpkg</em> command to do it in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
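<p>To see what the sed step is doing in isolation, you can feed it a tiny sample selections list (the package names here are made up):</p>
<pre><code># packages selected for removal carry the state &quot;deinstall&quot;; rewriting
# that state to &quot;purge&quot; tells dpkg --set-selections to purge them instead
printf &#39;vim\tinstall\nold-pkg\tdeinstall\n&#39; | sed &#39;s/deinstall/purge/&#39;
</code></pre>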
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>The modifyvm subcommand must only be used when the VM is powered off; use controlvm for a running VM.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that: we need compatible x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing both a compatible libXss and several Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If, like me, you have a slightly dated PHPUnit test case suite and have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can occasionally be dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entity ID, and then returns whichever value matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. The thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without rolling the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric, I think, is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running - a task (and this was the mid-90s) I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3 and Unity, the Linux Desktop has come a <em>long</em> way.)</p>
<p>Anyway, unlike either Enlightenment or Eterm, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: its size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU Find. For example, to convert a batch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; srcfile.jpg dstfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;${IMAGE%.jpg}-resized.jpg&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to the requested 1280x720, we need to use the bang operator e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
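<p>Newer versions of Git also accept an explicit flag for this, which reads more naturally than the colon form. Here is a minimal sketch against a throwaway local &#39;remote&#39; (the repository paths are invented for illustration):</p>

```shell
# Build a disposable bare repo to act as origin, then delete a branch on it.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD
git branch develop
git push -q origin develop
# Same effect as: git push origin :develop
git push -q origin --delete develop
git ls-remote --heads origin    # develop is gone; only the default branch remains
```

<p>Both spellings do the same thing; the --delete flag simply makes the intent explicit.</p>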
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a> which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file, i.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
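<p>A quick way to see a conversion in action at the byte level: 0xE9 is &#39;é&#39; in cp1252, and in utf-8 the same character becomes the two-byte sequence 0xC3 0xA9.</p>

```shell
# Feed a single cp1252 byte (0xE9, octal 351) through iconv and dump the
# resulting utf-8 bytes in hex.
out=$(printf '\351' | iconv -f cp1252 -t utf-8 | od -An -tx1 | tr -d ' \n')
echo "$out"    # c3a9
```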
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. The right side of the expression is evaluated first. We&#39;re using wget to quietly (-q) download a tgz file from somewhere on the internet and write the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, effectively a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
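<p>The same pattern can be tried offline by substituting a local command for wget. A small self-contained sketch (the file names are invented for the example; the explicit bash -c is there because process substitution is a bashism not available in plain sh):</p>

```shell
# Build a tiny tarball, then extract it via process substitution, with cat
# standing in for 'wget -q -O - <url>'.
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir atarfile
echo hello > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile                   # keep only the tarball
bash -c 'tar zxf - < <(cat atarfile.tar.gz)'
cat atarfile/file.txt            # hello
```

<p>In an interactive bash session the quoted command works as-is, without the bash -c wrapper.</p>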
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output, so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can chain the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
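<p>The chaining trick here is plain shell command substitution, so it can be illustrated without drush at all. In this sketch printf stands in for the pm-list call and the module names are invented:</p>

```shell
# The inner command emits a space-delimited list; the shell word-splits it
# into separate arguments for the outer command, exactly as with the
# drush pm-disable `drush pm-list ...` pair.
list=$(printf 'mod_a mod_b mod_c')
set -- $list        # unquoted on purpose: splitting turns the list into args
count=$#
echo "$count"       # 3
```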
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
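<p>As an aside, newer versions of Git can combine the push and the upstream configuration into a single step with the -u flag. A minimal sketch against a throwaway local remote (paths invented for illustration):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m 'initial commit'
git checkout -q -b my-new-feature
# Push and set the tracking relationship in one command
git push -q -u origin my-new-feature
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'   # origin/my-new-feature
```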
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G are <em>appended</em> to the user&#39;s existing list of groups. Otherwise the existing supplementary groups will be replaced by the supplied argument.</p>
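<p>To confirm the change took effect (group changes only show up in new login sessions), the id command lists a user&#39;s group memberships:</p>

```shell
# -n prints names rather than numeric ids, -G selects all group memberships.
# With no user argument it reports on the current user; pass a username
# (e.g. 'id -nG aaron') to check someone else.
id -nG
```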
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application of <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making amendments to a repo&#39;s master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
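<p>A small variation worth knowing: git stash pop applies the stash <em>and</em> drops it in one step, saving the cleanup of a leftover stash entry. A self-contained sketch in a throwaway repo (file names invented for the example):</p>

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name You
echo v1 > file.txt
git add file.txt
git commit -q -m 'initial commit'
git branch develop
echo v2 > file.txt       # the edit accidentally made on master
git stash                # 'save' is the default action
git checkout -q develop
git stash pop            # apply and drop the stash in one step
cat file.txt             # v2
```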
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
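<p>The same file can be generated with git itself, which avoids hand-editing; run these as the jenkins user. In this sketch HOME is pointed at a scratch directory purely for illustration:</p>

```shell
# git config --global writes to $HOME/.gitconfig
export HOME=$(mktemp -d)    # stand-in for /var/lib/jenkins in this sketch
git config --global user.name Jenkins
git config --global user.email jenkins@localhost
cat "$HOME/.gitconfig"
```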
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases; we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that is, at first glance, daunting. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run one build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of the project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built Debian package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>
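<p>During development the quickest fix is simply to blow the file cache away. A minimal sketch (the helper name is mine; it assumes the standard var/cache location under your store root):</p>

```shell
#!/bin/sh
# Hypothetical helper: wipe Magento's file cache so the cached column
# lists (and everything else cached) are rebuilt on the next request.
clear_magento_cache() {
  store_root="${1:-.}"
  rm -rf "$store_root"/var/cache/*
}
```

<p>You can also clear the cache from the admin panel under System -&gt; Cache Management.</p>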

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
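<p>The embedded gist holds the exact query; in outline, the approach boils down to something like the following sketch against INFORMATION_SCHEMA (swap in your own schema name):</p>

```sql
-- Largest tables in a schema, data plus index size in MB.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb,
       table_rows
FROM information_schema.TABLES
WHERE table_schema = 'mydb'   -- your database name here
ORDER BY (data_length + index_length) DESC
LIMIT 20;
```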
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
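<p>The gist below has the full snippet; in outline, the ZF1 API works like this (addresses and filenames here are placeholders):</p>

```php
<?php
// Sketch of sending an attachment with Zend_Mail (Zend Framework 1).
// Assumes ZF1 is on the include path.
require_once 'Zend/Mail.php';

$mail = new Zend_Mail();
$mail->setFrom('sender@example.com')
     ->addTo('recipient@example.com')
     ->setSubject('Report attached')
     ->setBodyText('Please find the report attached.');

// createAttachment() returns a Zend_Mime_Part you can tweak further
$attachment = $mail->createAttachment(file_get_contents('report.pdf'));
$attachment->type     = 'application/pdf';
$attachment->filename = 'report.pdf';

$mail->send();
```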
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
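<p>The gist below shows the image_size version. If you&#39;re curious what a gem like that does under the hood: for a PNG, the dimensions live in the IHDR chunk right after the 8-byte signature, so you can read them by hand (a toy sketch, PNG only):</p>

```ruby
# Read the width and height straight out of a PNG's IHDR chunk.
# Layout: 8-byte signature, 4-byte chunk length, "IHDR",
# then width and height as 4-byte big-endian integers.
PNG_SIGNATURE = "\x89PNG\r\n\x1a\n".b

def png_dimensions(data)
  raise ArgumentError, 'not a PNG' unless data[0, 8] == PNG_SIGNATURE
  data[16, 8].unpack('N2') # => [width, height]
end
```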
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your git config</p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote</p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, with the least amount of work.</p>
<p>To avoid having to do this at all, you can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it has performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
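<p>The gist below is the authoritative version; the core of the idea looks something like this sketch (the function name is mine, adjust credentials to taste):</p>

```shell
#!/bin/sh
# Sketch: dump only the tables of a database whose names match a
# SQL LIKE pattern, by asking mysql for the matching names first.
mysqldump_bypattern() {
  user="$1"; db="$2"; pattern="$3"
  tables=$(mysql -u"$user" -N -e "SHOW TABLES LIKE '$pattern'" "$db")
  if [ -z "$tables" ]; then
    echo "no tables in $db match $pattern" >&2
    return 1
  fi
  for t in $tables; do
    mysqldump -u"$user" "$db" "$t"
  done
}
```

<p>Call it as e.g. <code>mysqldump_bypattern root mydb &#39;mytables_%&#39; &gt; dump.sql</code>.</p>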
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them more respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest, I&#39;m still struggling for a good, simple definition. I think the best way to think of one is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails (4th ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby, two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
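<p>A quick example makes the identity difference concrete (a sketch of my own; the variable names are arbitrary):</p>

```ruby
# Two string literals with the same characters are two distinct objects...
a = "position"
b = "position"
puts a.equal?(b)    # false: two separate String objects in memory

# ...but two Symbols with the same name are the very same object.
x = :position
y = :position
puts x.equal?(y)    # true: only one copy of :position ever exists

# This is why Symbols are the idiomatic choice for hash keys.
person = { :name => "Aaron", :language => "Ruby" }
puts person[:name]  # prints Aaron
```

<p>Comparing two Symbols amounts to comparing object identities, whereas comparing two strings means comparing them character by character.</p>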
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
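<p>The looseness of Ruby constants is easy to demonstrate (a sketch; MAX is an arbitrary name of mine):</p>

```ruby
MAX = 10
MAX = 20   # permitted! Ruby only emits a warning about the reassignment
puts MAX   # prints 20; the 'constant' was silently changed

# A Symbol, by contrast, cannot be assigned to at all:
# `:max = 20` is a SyntaxError, not merely a warning.
```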
<p>So are Symbols a good language feature? With my lack of experience with Ruby, I don&#39;t yet feel qualified to answer definitively. My gut, though, says no. In a high-level language, the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say that as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them, but as a developer new to Ruby, they feel foreign.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character for a single-byte character set and up to four bytes per character for UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: the slow compression step prolongs the dump, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less disruptive, but lengthy dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump, meaning other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump starts a transaction before dumping the contents of a table, ensuring a consistent view of the data without blocking other clients; this applies to transactional engines such as InnoDB. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops tables being locked during the backup. For MyISAM tables this does mean consistency can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves import performance, as MySQL will only rebuild the indexes at the end of the import. With keys enabled, the indexes are updated after each row is inserted; given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement on an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
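<p>Putting the pieces together, a backup command combining these options might look like the following (a sketch only; substitute your own user, database and filenames, and note that --single-transaction only gives a consistent snapshot for transactional engines like InnoDB):</p>

```shell
# Dump without blocking writers; disable keys and autocommit for a faster re-import.
mysqldump --single-transaction --skip-lock-tables \
          --disable-keys --no-autocommit \
          -uuser -p mydatabase > mydump.sql

# Compress afterwards rather than in the pipeline, so the dump itself
# finishes (and releases any locks) as quickly as possible.
gzip mydump.sql
```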
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository (with a working tree) in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified and deleted tracked files (it does not pick up new, untracked files); alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to make an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar Subversion model of svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs, I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally it's shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool, and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0]; ?&gt;&#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed to the program.</p>&#13;
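<p>For completeness, Ruby does still expose the program's name; it simply keeps it out of ARGV. The global $0 (also aliased as $PROGRAM_NAME) holds it. A small sketch (the filename is hypothetical):</p>

```ruby
#!/usr/bin/env ruby
# file argv_demo.rb

puts $0              # the program's name, as argv[0] would be in C or PHP
puts ARGV.length     # the number of real arguments; the program name is excluded
puts ARGV[0].inspect # the first argument, or nil if none was given
```

<p>Running ruby argv_demo.rb helloworld prints the script name, then 1, then "helloworld".</p>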
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin / feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>The MySQL driver gets a little confused because when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often in the course of the index's lifecycle want to update documents.  This can prove tricky with the current implementation as there is no insitu update feature, you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as following:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're scanning the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source to see where termDocs() was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it goes unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic-table format, suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First, back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8' (note this blindly replaces every occurrence of the string 'latin1', including any that happen to appear in post content, so eyeball the diff afterwards).</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
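<p>If you want to sanity-check the conversion before trusting the full dump, you can run iconv over a tiny hand-made latin1 sample first (the file names here are illustrative):</p>

```shell
# \351 (0xE9) is 'é' in latin1; build a one-line latin1 file and convert it
printf 'caf\351\n' > sample-latin1.txt
iconv -f latin1 -t utf-8 sample-latin1.txt > sample-utf8.txt
cat sample-utf8.txt
```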
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any language packs you use need the same treatment before they are imported, so convert them too.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way via AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it could. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and restrained in its use of fonts. Giant mastheads, fancy bullets, a mess of competing typefaces: none of it is impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor it as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life, and not least this blog itself. If you listen to every top ten list of what not to include in your CV, you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read about what a good CV looks like and make your own mind up based on the supporting arguments and your own CV's feedback. For example, if you disagree with point two and decide to include an 'interests' section, ask recruiters when they call what they think of it: did it provide value or was it noise? If you're getting interviews, ask the recruiters what in your CV is standing out. If you're not, then ask what feedback, if any, there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (traits), a simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
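<p>A three-line script makes the point; reassigning a constant earns you nothing worse than a warning on stderr:</p>

```ruby
# 'Constants' in Ruby are just capitalised variables; reassignment merely warns.
ANSWER = 41
ANSWER = 42   # warning: already initialized constant ANSWER
puts ANSWER   # prints 42
```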
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and be left with a recognisable C fragment.</p>&#13;
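<p>To see the resemblance for yourself, here is a minimal TCP client sketch; squint past the $ sigils and it reads much like the C socket equivalent. The host and request are placeholders, not from the original post.</p>

```php
<?php
// Open a TCP connection, write a request, read the reply: the classic socket dance.
$fp = stream_socket_client('tcp://example.com:80', $errno, $errstr, 5);
if ($fp === false) {
    die("connect failed: $errstr ($errno)\n");
}
fwrite($fp, "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n");
echo fread($fp, 256);
fclose($fp);
```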
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is one of the few programming environments in common use today that does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array of 20 chars (room for 19 characters plus the terminating NUL), and str refers to an address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in. These were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is stored first (big endian) or the little end is (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
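<p>For instance, you can build a CSV in memory and read it straight back, no temp file needed (this example is mine, not from the original post):</p>

```php
<?php
// Write CSV rows to a memory-backed stream, then read the whole thing back.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'aaron'));

rewind($fh);                    // seek back to the start before reading
echo stream_get_contents($fh);  // id,name\n1,aaron\n
fclose($fh);
```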
<p>You can fread(), fwrite() and stream_get_contents() the memory stream, or push its contents out over the network using the TCP streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
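For instance, a monthly breakdown of order counts might look like this (table and column names here are hypothetical):

```sql
SELECT DATE_FORMAT(created_at, '%Y-%m') AS grouping_date,
       COUNT(id) AS total_orders
FROM orders
GROUP BY grouping_date
ORDER BY grouping_date;
```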
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL (including the encoded post data), the other the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
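If you do go down the automation route, the first step is usually to pull a key field such as transId out of the saved post data so a retry job can key on it. A minimal shell sketch (the payload is trimmed from the example above):

```shell
# Extract transId from saved Worldpay callback post data so a retry
# job can key on it (payload trimmed from the example above).
postdata='transId=1000000000&transStatus=Y&cartId=12345678'
trans_id=$(printf '%s' "$postdata" | tr '&' '\n' | sed -n 's/^transId=//p')
echo "transId=$trans_id"
```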
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
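Putting both together, the complete svn:externals property value might look like this (one definition per line):

```
Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/
dojo http://svn.dojotoolkit.org/src/tags/release-1.4.3/
```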
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>529</em></span></p>&#13;
<p><strong><em>The solution is explained here:</em></strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
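The safe convention, sketched below (class and alias names here are illustrative): use underscores rather than intra-word camelcase, so each underscore maps cleanly to a directory.

```php
<?php
// Sketch: avoid intra-word camelcase in model class names.
// Class:  MyPackage_MyModule_Model_Long_Name
// File:   app/code/local/MyPackage/MyModule/Model/Long/Name.php
// The alias below resolves predictably on every file system:
$model = Mage::getModel('mymodule/long_name');
```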
<p>On Windows (and on the default, case-insensitive HFS+ on Mac) this will appear to work; on a case-sensitive file system, e.g. most Unix file systems or case-sensitive HFS+, it will not.</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
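Applied over a collection, the pattern looks roughly like this (a sketch; the num_sales attribute and the $rankings map of product IDs to values are illustrative):

```php
<?php
// Sketch: batch-update one simple attribute across a product collection
// without triggering full product saves. $rankings is a hypothetical
// array of productId => sales count computed elsewhere.
$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    if (isset($rankings[$product->getId()])) {
        $product->setNumSales($rankings[$product->getId()]);
        // Writes just this attribute's value, skipping the full save pipeline
        $product->getResource()->saveAttribute($product, 'num_sales');
    }
}
```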
<p>Using the getResource()-&gt;saveAttribute() call takes roughly a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table with a date more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
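For a Unix-timestamp column, the same filter can be written with FROM_UNIXTIME (a sketch; table and column names are hypothetical):

```sql
SELECT * FROM orders
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(created_ts);
```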
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old and will be selected.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p><br />This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by subbing the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block from your enclosing templates, e.g. 3columns.phtml, or from other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first, though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here gives you fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore, within the &lt;customer_account&gt; element we add the following code:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour in memcached 1.4, which dropped the optional second (time) argument to the delete command. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. Thankfully, 1.4.4 restored some backwards compatibility.</p>&#13;
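<p>You can see the protocol difference for yourself by emitting the raw commands an old client sends versus a new one. This is just a sketch: the key name is illustrative, and to reproduce the CLIENT_ERROR above you would pipe the output into nc localhost 11211 against a live memcached 1.4.0-1.4.3 server.</p>

```shell
# Old (pre-1.4) clients pass a second "time" argument to delete;
# memcached 1.4.0-1.4.3 rejects that form with CLIENT_ERROR.
old_style_delete() { printf 'delete mykey 0\r\n'; }   # rejected by 1.4.0-1.4.3
new_style_delete() { printf 'delete mykey\r\n'; }     # always accepted

old_style_delete
new_style_delete
```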
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your shell environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale. You can get a list of available locales by calling 'locale -a'. If you're using en_GB or de_DE, just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
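<p>The check-then-configure step can be sketched like this (de_DE.UTF-8 is assumed to be installed here; substitute whatever 'locale -a' actually shows on your machine):</p>

```shell
# List a few UTF-8 capable locales available on this machine.
locale -a | grep -i 'utf' | head -n 3

# Export one of them for this session; add the same line to ~/.profile
# (or ~/.bash_profile) to make it permanent.
export LC_ALL='de_DE.UTF-8'
echo "LC_ALL=$LC_ALL"
```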
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters, like accents, symbols and umlauts, differ: a cp-1252 trademark symbol has a different code than its utf-8 counterpart. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables use your normalised character set. I recommend utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default, both of these functions assume iso-8859-1 input. To correctly prepare your utf-8 text for output, you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
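<p>You can check the same conversion from the shell with the iconv(1) utility; here \351 is the octal escape for 0xE9, the iso-8859-1 byte for é:</p>

```shell
# Convert a latin-1 encoded string to utf-8 on stdout.
printf 'caf\351\n' | iconv -f ISO-8859-1 -t UTF-8   # café
```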
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages, it would seem that sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>On Mac OSX it doesn't work, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to pass an empty backup suffix as a separate argument: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
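<p>The incompatibility is between BSD sed (shipped with Mac OSX), which requires a backup suffix argument after -i, and GNU sed on linux, which does not. A portable sketch accepted by both is to attach a real suffix directly to -i (the file name here is just an example):</p>

```shell
# -i.bak (no space) works with both GNU and BSD sed; it leaves a
# helloworld.txt.bak backup behind that you can delete afterwards.
dir=$(mktemp -d)
printf 'hello world\n' > "$dir/helloworld.txt"
sed -i.bak 's/hello/goodbye/g' "$dir/helloworld.txt"
cat "$dir/helloworld.txt"   # goodbye world
```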
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento. Below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>${#FILES[@]} expands to the number of elements in FILES, and the seq command produces a sequence of numbers from x to y: seq 0 4 prints the numbers 0 through 4, one per line. So seq 0 $((${#FILES[@]} - 1)) yields one index for every element of the array.</p>&#13;
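<p>As an aside, bash can also expand the array directly, which avoids the seq arithmetic entirely (same file list as above):</p>

```shell
# Iterate over the array elements themselves; quoting "${FILES[@]}"
# keeps paths containing spaces intact.
FILES=( a/path/to/a/file1 a/path/to/another/file2 )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```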
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to encrypt data for core_config_data in a format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
if (!isset($_SERVER['argv'][1])) {&#13;
    die('Usage: php ' . basename(__FILE__) . ' &lt;plaintext&gt;' . PHP_EOL);&#13;
}&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains, so the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname</p>
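<p>To see what the alias expands to, here&#39;s a minimal sketch in a throwaway repo (the temp-directory setup is illustrative; your branch name will differ):</p>

```shell
# Sketch: what `git sup` resolves to, shown in a throwaway repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
# symbolic-ref --short prints the current branch name (e.g. master or main)
branch=$(git symbolic-ref --short HEAD)
echo "git branch --set-upstream-to=origin/$branch"
```

<p>So on a branch called zendesk, git sup would run git branch --set-upstream-to=origin/zendesk.</p>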
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables set inside a pipeline cannot be passed back out: each stage runs in its own subprocess, so any changes it makes to its environment disappear when that stage exits.</p>
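<p>A minimal demonstration (the workaround at the end is bash-specific):</p>

```shell
# The while loop runs in a subshell created by the pipe, so the
# parent shell's copy of count is never touched.
count=0
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))
done
echo "$count"   # prints 0, not 3

# One workaround (bash): feed the loop with process substitution
# so the loop body runs in the current shell.
count=0
while read -r line; do
  count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "$count"   # prints 3
```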
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
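<p>Here&#39;s a quick sketch of the behaviour in a throwaway repo (paths, branch and user names are illustrative): once the branches have diverged, merge.ff only refuses the merge.</p>

```shell
# Throwaway repo demonstrating merge.ff=only (names are illustrative).
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name "You"
git config merge.ff only

echo base > file; git add file; git commit -qm base
git checkout -qb feature
echo feature >> file; git commit -qam feature
git checkout -q -                  # back to the original branch
echo diverge > other; git add other; git commit -qm diverge

# The branches have diverged, so no fast-forward is possible and the
# merge is refused; rebase first, or override with git merge --no-ff.
git merge feature || echo "merge refused"
```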
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial, however, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea of those that attempt it, and do a good job, being appropriately rewarded for doing so.</p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a test style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group; however, some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
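<p>If you want to convince yourself of the behaviour without a MySQL server to hand, the sqlite3 CLI (assumed to be installed here) implements NULLIF and MIN the same way; the table and values below are made up for the example:</p>

```shell
# NULLIF(price, 0) maps 0.00 to NULL, and the MIN aggregate skips NULLs,
# so zero prices never win. Demonstrated with an in-memory sqlite3 db.
sqlite3 :memory: "
  CREATE TABLE products (group_id INTEGER, price REAL);
  INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 2.0);
  SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
  FROM products GROUP BY group_id;
"
```

<p>Group 1&#39;s 0.00 row is ignored, so its minimum comes out as 4.5 rather than 0.</p>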
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. To HFS, PHP and php appear as the same thing.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite finished running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However, I ended up coming away with a lot more: in particular, a new appreciation for a number of scientists I previously knew very little about, scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay, the technical genius of Chuck Thacker and Butler Lampson, and the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order, with each chapter focusing on one of the scientists and/or their inventions. The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists, and it is his ability to bring these characters to life that makes the book so riveting to read.</p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (n&eacute;e ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and, at times, support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him: his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his lab, the other labs or management, who didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. Inside it, he was the oil that kept the cogs turning, and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs.</p>
<p>The book contains a number of particularly powerful scenes. Two stuck out for me. The first is Alan Kay falling into a depression after his vision for a Personal Computer is brusquely put down by CSL manager Jerry Elkind. Alan Kay is well known for his brilliance and verbal flourish, but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reaction to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down, but Hiltzik&#39;s account conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close, and with his departure so too ends the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM&#39;s and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC in particular, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows [1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
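<p>The guard works because of PHP&#39;s string-to-float casting rules: both the empty string and &#39;0&#39; cast to 0.0, so neither can pass the &gt; 0 check and reach the XML as an invalid decimal. A quick standalone check of that behaviour (the test values are illustrative):</p>

```php
<?php
// Standalone check of the cast behaviour the guard relies on:
// '' and '0' are normalised to the float 0.00, positive prices pass through.
$cases = array('', '0', '0.00', '19.99');
foreach ($cases as $raw) {
    $price = ((float) $raw > 0) ? $raw : 0.00;
    printf("%-8s => %s\n", var_export($raw, true), var_export($price, true));
}
```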
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool; instead take advantage of the local and community codepools&#39; higher classloader priority [2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight software entropy (software&#39;s tendency to lose structure over time). The concept has parallels with the real world: urban areas with broken windows tend to see higher levels of vandalism than areas where windows are promptly repaired.</p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb: &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic, and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a>:</p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat, but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Although most of the vocabulary of OO software development was formalised during Smalltalk&#39;s development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the succession to C++ and Java.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again; the only difference is perhaps a few more layers of abstraction, bigger piles of data and slightly more exotic technologies. When I consider that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was already doing the rounds in the 60s, I wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee or lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I am slightly OCD in that I need to keep the unread count at zero; at the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command; in my setup it fires up vim. This is great for some of the long Rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh. While I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an integer will switch you directly to the directory at that index in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh: use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at GitHub, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short: assuming you have MacPorts in /opt/local (the default) and are using the mysql55 package, you do this:</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine; however, to debug during a phpunit test you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections made to its own port 9000 back to the ssh client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad, but once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps, and it often seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3, with features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="https://github.com">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
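<p>Two of those 5.3 features are easy to show in miniature. The snippet below is an illustrative sketch (the namespace and function names are made up for the example), not taken from any particular framework:</p>

```php
<?php
namespace Acme\Demo; // illustrative namespace, not a real library

// PHP 5.3 namespaces let library code use short names without
// colliding in the global scope.
function slugify($title)
{
    return strtolower(str_replace(' ', '-', $title));
}

// PHP 5.3 closures: anonymous functions that capture surrounding state.
$prefix = 'post';
$makeUrl = function ($title) use ($prefix) {
    return $prefix . '/' . slugify($title);
};

echo $makeUrl('Hello World'); // prints post/hello-world
```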
<p>One problem remained though: managing and distributing dependencies. Modern web development platforms all have robust dependency management tools available, and in the PHP camp <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable version of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable (at best) and dubious (at worst) quality, and a community lacking any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, and free from having to navigate the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
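<p>To make this concrete, here is what a minimal <code>composer.json</code> might look like. This is a hypothetical sketch: the project name is invented, and Monolog is just an example of a library published on Packagist.</p>

```json
{
    "name": "acme/example-app",
    "description": "Hypothetical app demonstrating a Composer manifest",
    "require": {
        "monolog/monolog": "1.0.*"
    }
}
```

<p>Running <code>composer install</code> against this resolves the dependency, downloads it into <code>vendor/</code> and generates an autoloader you can require from your bootstrap.</p>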
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with (it can also read standard input if you pass <code>-</code>), so we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to hand it the command output without creating temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
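<p>If you want to see this in action without touching a real project, here is a throwaway demo; the file names and commit messages are invented, and everything happens in a temporary directory:</p>

```shell
# Scratch repo to demo --name-only; safe to run anywhere
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

echo one > a.txt
echo two > b.txt
git add a.txt b.txt
git commit -qm "add a and b"

echo three >> a.txt
git commit -qam "change a"

# Show only the files touched by the latest commit, without the diff body
git show --name-only --format= HEAD # lists just a.txt
```

<p>The same flag works with git log, giving you a per-commit file list across a whole range of history.</p>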
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was generally respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the Rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>Admittedly this isn&#39;t great documentation, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails looks at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
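<p>To convince myself of that ordering, I modelled the lookup in a few lines of plain Ruby. To be clear, this is not Rails code, just an illustrative sketch of the search the docs describe (the class and method names here are mine):</p>

```ruby
# Toy model of rescue_from's handler lookup: handlers are kept in
# registration order and searched bottom-to-top, so the last matching
# registration wins.
class MiniRescuable
  def initialize
    @handlers = [] # [exception_class, block] pairs, oldest first
  end

  def rescue_from(klass, &block)
    @handlers << [klass, block]
  end

  def handler_for(exception)
    # "from bottom to top": reverse the list, first is_a? match wins
    pair = @handlers.reverse.find { |klass, _| exception.is_a?(klass) }
    pair && pair.last
  end
end

SomeSpecificError = Class.new(StandardError)

c = MiniRescuable.new
c.rescue_from(SomeSpecificError) { "specific handler" }
c.rescue_from(Exception)         { "catch-all handler" }

# The trailing Exception handler shadows everything registered above it:
puts c.handler_for(SomeSpecificError.new).call # prints "catch-all handler"
```

<p>Flip the registration order and the specific handler wins again, which is exactly why the catch-all belongs at the top of the list, not the bottom.</p>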
<p>What can we learn from this? Well, one lesson is that Rails programmers live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are often wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you can do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh this list I need to prune my stale remote branches; newer versions of git can also do this as part of a fetch with git fetch --prune. The standalone incantation is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting each observer&#39;s type to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
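<p>For the curious, the override looks roughly like the sketch below. I&#39;m reproducing the node names from memory of Mage_Log&#39;s config.xml, so treat this as an illustration and check the event names against your Magento version; only two of the log observers are shown here.</p>

```xml
<!-- app/etc/local.xml (fragment): disable Mage_Log's frontend observers -->
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers>
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_postdispatch>
            <!-- repeat the same pattern for the other Mage_Log events -->
        </events>
    </frontend>
</config>
```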
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages and they definitely get routed through Magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy, replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe: they fixed one occurrence but (programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib, though: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not supported by the stock Chef gem (currently version 10.12.0) when running Chef Solo. To make use of them you need version 10.14.0 or above, which means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped \; terminates the command sequence (much like an unescaped ; does in regular bash).</p>
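<p>As an aside, recent GNU find also accepts a date string directly via -newermt (&quot;newer than modification time&quot;), which skips the touch step entirely. A small self-contained sketch, with invented file names:</p>

```shell
# Create two files with known mtimes, then list only the one inside the
# date range; -newermt compares against a date string, no boundary files
tmp=$(mktemp -d)
touch -d "2012-08-03 12:00" "$tmp/in-range.log"
touch -d "2012-07-01 12:00" "$tmp/too-old.log"

# Files modified on or after 1 Aug 2012 but before 7 Aug 2012
find "$tmp" -type f -newermt "2012-08-01" ! -newermt "2012-08-07" -ls
```

<p>The same -newermt pair can of course be combined with -exec rm as above once you are happy with the listing.</p>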
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, as I did from my old 13&quot; Macbook Pro, and it has an equal or higher wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real task has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best package distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash too (or zsh, if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had done this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shells without complaint.</p>
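<p>As a sketch, the sequence looks like this, demonstrated on a scratch copy of the file (the /opt/local path assumes Macports&#39; default prefix):</p>

```shell
# chsh consults /etc/shells and rejects any shell not listed there.
# Demonstrated on a scratch copy; on a real system you would edit
# /etc/shells itself (as root), then run: chsh -s /opt/local/bin/bash
shells=$(mktemp)
cp /etc/shells "$shells" 2>/dev/null || :    # seed from the system list if present
echo '/opt/local/bin/bash' >> "$shells"      # add the Macports bash
grep -x '/opt/local/bin/bash' "$shells"      # chsh would now accept this path
```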
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; \
    | mysql -uuser -ppass --silent yourdb \
    | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; \
    | paste -s -d &#39;,&#39; - \
    | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because you can&#39;t embed a single quote inside a single-quoted shell string, the quote character is passed in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
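<p>You can check the quoting, joining and wrapping stages end to end by substituting a few sample lines for the mysql output (the values here are made up):</p>

```shell
# Stand-in for `mysql --silent` output: one column value per line
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# → ['red','green','blue'];
```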
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name:[Start TO Finish]
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search for documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is harder, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the given date range and is not NULL. When you negate that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic - it doesn&#39;t look at multi-product combinations - but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. That means it has to be constructed again on every pass, but it also allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can (in theory) run on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic gnu coreutils...and irssi. Enough to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break, I mean that in the absolute best case the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host, and if you need to access some specific hardware you need to set it up on the host.</p>
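<p>The whole preparation sequence can be collected into a small helper. This is only a sketch that prints the commands (the device, filesystem types and mountpoint are examples) so you can review them before piping the output to a root shell:</p>

```shell
# Print the chroot preparation steps for a given root device and
# mountpoint -- a dry run; pipe the output to `sudo sh` to execute.
chroot_prep() {
  dev=$1; target=$2
  echo "mount -t ext4 $dev $target"
  echo "mount -t proc none $target/proc"
  echo "mount -o bind /dev $target/dev"
  echo "mount -o bind /sys $target/sys"
  echo "cp /etc/resolv.conf $target/etc/resolv.conf"
  echo "chroot $target /bin/bash"
}

chroot_prep /dev/sda5 /mnt/ubuntu
```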
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their children&#39;s stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid looking results. Funnily enough though, the status column came first and the product id column second (the reverse of the if branch). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two entries (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
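<p>The collapse is easy to reproduce outside PHP. This awk sketch mimics fetchPairs by keying on the first field (the row values are made up: three products, all sharing status 1):</p>

```shell
# Rows of (status, product_id): every product shares status 1
rows='1 101
1 102
1 103'

# Keying on column 1, like fetchPairs: status first, so rows collapse
echo "$rows" | awk '{ if (!($1 in p)) n++; p[$1] = $2 } END { print n }'   # → 1

# Keying on the product id instead: one entry per product, as intended
echo "$rows" | awk '{ if (!($2 in p)) n++; p[$2] = $1 } END { print n }'   # → 3
```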
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make, and they tie them back to the TDD and OOAD principles introduced in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read, and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD, and also generally to anyone with an interest in Object Oriented software design and practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking, .bash_profile is sourced only by login shells: specifically, when you enter your username and password at the console or log in over SSH. The .bashrc file is sourced for interactive non-login shells, that is, whenever you open up a terminal. (Mac OSX&#39;s Terminal.app is a notable exception: it starts every new window as a login shell.)</p>
<p>There is some scope for confusion with login shells started after you are already logged in, such as when you use the su - command or tick the &#39;run as login shell&#39; option some terminal emulators provide. The same rule applies in these cases: a login shell sources .bash_profile, not .bashrc (unless your .bash_profile explicitly sources .bashrc itself).</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration that isn&#39;t likely to change very much. But it&#39;s quite reasonable to simply source .bashrc from your .bash_profile and keep everything in .bashrc.</p>
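<p>Putting that advice into practice, a minimal pair of files might look like this (the PATH entry is only a placeholder for your own setup):</p>

```shell
# ~/.bash_profile: read by login shells only.
# Keep one-time environment setup here.
export PATH="$HOME/bin:$PATH"

# Delegate everything else to .bashrc so login and non-login
# interactive shells end up configured identically.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```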
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>, presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That seems far more likely to stifle innovation than to encourage it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it all in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
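<p>As a quick sanity check before running the purge, you can list the packages that are in this half-removed state. dpkg marks them &#39;rc&#39; (removed, config files remain) in its status output; a small sketch:</p>

```shell
# Packages in state "rc" have been removed but still have config files.
# The awk filter prints just the package name column.
dpkg -l | awk '/^rc/ { print $2 }'
```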
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Use modifyvm only when the VM is powered off; for a running VM, use controlvm instead.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that: we need compatible x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and here the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing not only a compatible libXss but also various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, pass the -S -masm=intel arguments to gcc and compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into, and then ripped out of, the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entity ID, and then returns the entry matching $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The changed code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
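<p>As a small illustration, here is a sketch that builds a throwaway image with convert and then queries it with identify (the canvas.png name and 320x180 size are arbitrary):</p>

```shell
# Create a plain grey 320x180 test image.
convert -size 320x180 xc:gray canvas.png

# Full summary: filename, format, dimensions, depth and so on.
identify canvas.png

# -format extracts just the fields you ask for.
identify -format '%wx%h %m\n' canvas.png   # -> 320x180 PNG
```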
<p>The power of these tools is best wielded with simple bash loops or GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; srcfile.jpg dstfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of mybranch to the branch adiffnamefortheremotebranch on the remote origin&#39;. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) effectively says &#39;push nothing into the remote branch someremotebranch&#39;, and Git takes this to mean delete that remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
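<p>The whole push-then-delete cycle can be played out safely against a throwaway repository (the remote.git and work names below are arbitrary):</p>

```shell
# Set up a bare "remote" and a working clone.
git init --bare remote.git
git clone remote.git work
cd work
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"

# Push the current branch into a remote branch called develop.
# (The full refs/heads/ form avoids any ambiguity for a new branch.)
git push origin HEAD:refs/heads/develop
git ls-remote --heads origin    # develop is listed

# Push "nothing" into develop, i.e. delete it.
git push origin :develop
git ls-remote --heads origin    # develop is gone
```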
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a> which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat little utility which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is that Sponge waits for end-of-file (EOF) on its input before opening and writing to the output file. I.e. it soaks up all the input data before it commences writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
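<p>For example, on most Linux systems file with the -i flag reports a MIME type and a guessed charset. The exact output varies by platform and by the file&#39;s contents, so treat it as a hint rather than gospel:</p>
<pre><code>$ file -i myfile.txt
myfile.txt: text/plain; charset=iso-8859-1
</code></pre>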
<p>So if we have a directory of, say, C source files we want to convert, we can use bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
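<p>For what it&#39;s worth, in this particular case an ordinary pipe does the same job if you tell tar explicitly to read the archive from stdin with -f -. Process substitution really earns its keep when a command wants a filename argument rather than stdin:</p>
<pre><code>$ wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxvf -
</code></pre>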
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s pm-disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
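<p>As an aside, if your Git is 1.7.0 or newer the last two commands can be collapsed into one; the -u (--set-upstream) flag pushes the branch and sets up tracking in a single step:</p>
<pre><code>$ git push -u origin my-new-feature
</code></pre>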
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
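<p>You can verify the change took effect with the id command. The output will look something along these lines (the numeric ids will of course differ on your system):</p>
<pre><code>$ id aaron
uid=1000(aaron) gid=1000(aaron) groups=1000(aaron),10(wheel)
</code></pre>
<p>Note the new group membership only applies to new login sessions; existing sessions keep their old group list.</p>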
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
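<p>Note that git stash apply leaves the stash on the stash list; you can tidy up afterwards with git stash drop, or use git stash pop, which applies the most recent stash and drops it in one step:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash pop
</code></pre>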
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
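<p>When you&#39;re happy for the local branch to share the remote branch&#39;s name, there is an even shorter form; --track infers the local branch name from the remote one:</p>
<pre><code>$ git checkout --track origin/develop
</code></pre>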
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
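<p>Alternatively, you can let git write the file for you by running git config as the jenkins user (the -H flag ensures HOME points at the jenkins home directory, so --global writes to the right .gitconfig):</p>
<pre><code>$ sudo -H -u jenkins git config --global user.name &quot;Jenkins&quot;
$ sudo -H -u jenkins git config --global user.email &quot;jenkins@localhost&quot;
</code></pre>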
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates, as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases; we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to view them through Jenkins.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a first build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
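<p>For example, to trigger a build of a job and wait for it to finish, pass the build command a job name (my-project below is just a placeholder) along with -s to block until completion:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 build my-project -s
</code></pre>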
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload the server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
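<p>After restarting, you can sanity-check that Jenkins answers on the new port (8081 below is just an example value):</p>
<pre><code>$ curl -I http://localhost:8081/
</code></pre>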
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this; of all the caching quirks in Magento, this is the one I have burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>
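<p>In practice that cache clear can be as simple as deleting Magento&#39;s file cache from the store root (the path below is illustrative, and this assumes the default file cache backend):</p>
<pre><code>$ cd /path/to/magento
$ rm -rf var/cache/*
</code></pre>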

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
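<p>The full query is in the gist below; in outline it looks something like this (my sketch against INFORMATION_SCHEMA, not necessarily the gist&#39;s exact SQL):</p>
<pre><code>$ mysql -uuser -p -e &quot;SELECT table_schema, table_name,
         ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
    FROM information_schema.TABLES
   ORDER BY data_length + index_length DESC
   LIMIT 20;&quot;
</code></pre>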
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer:</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
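<p>As a shorthand, -P is equivalent to --partial --progress, and -e is the short form of --rsh:</p>
<pre><code>rsync -P -e ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>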
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file (along with a lot of other verbiage) is a message like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not much help by itself, but it was enough for Google to lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the graphical package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Model_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use:</p>
<pre><code>Mage_Payment_Model_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best with the least amount of work.</p>
<p>To avoid having to set this up manually each time, you can also run:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400meg database and a lot of orders, it took ~ 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array and then iterating over that array with successive calls to mysqldump.</p>
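<p>In outline, the combined approach looks something like this (a sketch rather than the exact gist; note the escaped underscore, as _ is itself a wildcard in LIKE patterns):</p>
<pre><code>$ tables=$(mysql -uuser -p -N -e &quot;SHOW TABLES LIKE &#39;mytables\_%&#39;&quot; mydb)
$ mysqldump -uuser -p mydb $tables &gt; mytables.sql
</code></pre>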
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby, the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this:</p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
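<p>You can see this with a couple of one-liners; two Symbol literals are the same object, while two String literals are not:</p>
<pre><code>$ ruby -e &#39;puts :foo.object_id == :foo.object_id&#39;
&gt;&gt; true
$ ruby -e &#39;puts &quot;foo&quot;.object_id == &quot;foo&quot;.object_id&#39;
&gt;&gt; false
</code></pre>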
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: compressing inline slows the dump down, and (by default in MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but long-held locks should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may actually be less preferable to the few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump, meaning other clients will not be permitted to write to a table while the dump is in progress. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are <code>--single-transaction</code> and <code>--skip-lock-tables</code>.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of <code>--single-transaction</code> means MySQL issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other clients. Writes can occur while the backup is taking place and will not affect the backup. The <code>--skip-lock-tables</code> option stops MyISAM tables being locked during the backup. This does mean the integrity of the dump can be lost as writes occur during the backup process; that risk has to be weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance: gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are <code>--disable-keys</code> and <code>--no-autocommit</code>.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This carries unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
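<p>Pulling the pieces together, here is a sketch of a dump command using all of the options discussed (the user and database names are placeholders; the snippet only prints the command, since actually running it needs a live server):</p>&#13;

```shell
# Compose the import-friendly mysqldump invocation discussed above.
# Echoed rather than executed, because it requires a running MySQL
# server; user/database names are hypothetical.
dump='mysqldump --single-transaction --skip-lock-tables --disable-keys --no-autocommit -uuser -p mydatabase'
echo "$dump | gzip -c > mydump.sql.gz"
```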
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new empty git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the <code>--cached</code> option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (<code>--untracked-files</code>) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is the canonical naming for a bare git repository (i.e. one that has only the meta information and not a working copy), which usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of adding the origin. However, if you have just created a new bare remote repository and want to make an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>. This is the familiar model of subversion and svn commit.</p>&#13;
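<p>The whole workflow can be rehearsed locally by substituting plain filesystem paths for the ssh URL (a sketch; all directory and file names here are illustrative):</p>&#13;

```shell
# Create a bare 'central' repo, a working repo, then push the first
# commit from one to the other using local paths.
rm -rf /tmp/gitdemo && mkdir /tmp/gitdemo && cd /tmp/gitdemo
git init --bare central.git
git init work
cd work
git symbolic-ref HEAD refs/heads/master   # pin the traditional branch name
git config user.email demo@example.com    # throwaway identity for the demo
git config user.name  Demo
echo hello > README
git add README
git commit -m 'Initial commit'
git remote add origin /tmp/gitdemo/central.git
git push origin master
```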
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance (it seems PEAR trips up when its download cache directory is missing, so recreating it clears the error):</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and starts to look more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
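<p>The BASH convention is easy to demonstrate (a sketch; the script path is illustrative). <code>$0</code> holds the program name and <code>$1</code> the first real argument:</p>&#13;

```shell
# Write a tiny script and run it: argv[0] is the program itself, just as
# in C and PHP.
cat > /tmp/argv_demo.sh <<'EOF'
echo "argv0: $0"
echo "argv1: $1"
EOF
sh /tmp/argv_demo.sh helloworld
# prints: argv0: /tmp/argv_demo.sh
#         argv1: helloworld
```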
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
<p>Which makes more sense? Probably the Ruby/Perl implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL rubygem as directed by Rake, you link against the bundled OSX MySQL and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by ID. Even worse is if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('http://a.com/uri', 'uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Seach_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As on many platforms, in PHP it seems documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic-table format, suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly, you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe, just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
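<p>The two conversion steps (the sed charset rewrite and the iconv re-encode) can be smoke-tested on a throwaway file before you touch the real dump; the file names in this sketch are illustrative:</p>

```shell
# Create a tiny stand-in for the real dump.
printf "CREATE TABLE post (t TEXT) DEFAULT CHARSET=latin1;\n" > sample_dump.sql

# Rewrite the charset declarations, exactly as for the full dump.
sed 's/latin1/utf8/g' sample_dump.sql > sample_dump-step1.sql

# Re-encode the bytes themselves from latin1 to UTF-8.
iconv -f latin1 -t utf-8 sample_dump-step1.sql > sample_dump-utf8.sql

cat sample_dump-utf8.sql
```

<p>Once you're happy with the output, run the same pipeline against the real backup file.</p>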
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to install, convert it to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similarity of the titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh, sorry, that should be capitalized; Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</p>&#13;
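<p>A quick demonstration (the constant name here is my own invention): reassignment produces only a warning on stderr, and the new value sticks:</p>

```ruby
# Ruby 'constants' are just identifiers that start with a capital letter.
# Reassignment is merely warned about, never prevented.
ANSWER = 41
ANSWER = 42  # warning: already initialized constant ANSWER

puts ANSWER  # prints 42 - the "constant" happily changed
```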
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute, there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And of course a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience of and understanding of how these low level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is gaining an appreciation of memory management. C is unusual among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL '\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str points to an address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won’t auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
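<p>A minimal sketch (the variable names are my own): build a CSV in memory with fputcsv, then read it straight back, without anything ever touching the disk:</p>

```php
<?php
// Open an in-memory stream; it behaves like any other file handle.
$fh = fopen('php://memory', 'wb+');

// fputcsv() insists on a stream, but nothing here hits the disk.
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'Aaron'));

// Rewind to the start and pull the CSV back out as a string.
rewind($fh);
echo stream_get_contents($fh);

fclose($fh);
```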
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one containing the request data Worldpay sent to your callback URL, including the encoded post data, and the other containing the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like Phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in some (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
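<p>To make the translation concrete, here is a rough sketch of the class-name-to-path mapping the autoloader performs. The real Varien_Autoload implementation differs in detail between versions; this is illustrative only:</p>&#13;

```php
// Sketch of how Magento's autoloader maps a class name to a file path
// (illustrative - not the exact Varien_Autoload implementation).
function classToPath($class)
{
    // Each underscore becomes a directory separator, and each segment is
    // ucfirst'd - so only the FIRST letter of each segment is case-corrected.
    $segments = array_map('ucfirst', explode('_', $class));
    return implode(DIRECTORY_SEPARATOR, $segments) . '.php';
}

echo classToPath('MyPackage_MyModule_Model_Alongnameforamodel') . "\n";
// Resolves to MyPackage/MyModule/Model/Alongnameforamodel.php, so a file
// named ALongNameForAModel.php will not be found on a case-sensitive
// file system.
```
&#13;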
<p>On Windows this is fine; on a case-sensitive file system (e.g. case-sensitive HFS+ on Mac, or a typical Unix file system), it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
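<p>Putting it together, a batch update loop might look like the sketch below. The num_sales attribute matches the example above, but lookupSalesRank() is a hypothetical stand-in for wherever your new values actually come from:</p>

```php
// Hypothetical batch update sketch: write a 'num_sales' attribute across a
// product collection without triggering full product saves.
$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    // lookupSalesRank() is a placeholder for your own ranking source
    $product->setNumSales(lookupSalesRank($product->getId()));
    // Writes just this attribute's value row, skipping the full save() path
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```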
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old and matches the WHERE condition.</p>&#13;
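<p>If you are running the query from PHP, a minimal PDO sketch might look like this. The DSN, credentials and the orders/created_at names are placeholders:</p>&#13;

```php
// Sketch: running the DATE_SUB query from PHP with PDO. The DSN, credentials
// and table/column names below are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'secret');
$stmt = $pdo->query(
    'SELECT * FROM orders WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > created_at'
);
foreach ($stmt as $row) {
    // Every $row returned here is more than 30 days old
    printf("Stale order %s from %s\n", $row['id'], $row['created_at']);
}
```
&#13;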
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by subbing in the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale (you can get a list of available locales by running 'locale -a'). Just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8', as appropriate.</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. UTF-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. Extended characters, however, like accents, symbols and umlauts, differ: a cp-1252 trademark symbol has a different byte value to its utf-8 equivalent. If you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default, both these functions assume iso-8859-1 input. To correctly prepare your utf-8 text for output you need to supply a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
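<p>If you want to try the conversion outside PHP, the iconv command-line utility (assumed installed; it ships with most unix systems) can sketch the same normalisation:</p>

```shell
# The single byte \351 (0xE9) is 'é' in ISO-8859-1; in UTF-8 the same
# character becomes the two-byte sequence 0xC3 0xA9.
utf8=$(printf '\351' | iconv -f ISO-8859-1 -t UTF-8)
printf '%s' "$utf8" | od -An -tx1    # shows the bytes c3 a9
```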
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
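<p>For context, that error comes from BSD sed, the version Mac OSX ships: its -i takes a mandatory backup-suffix argument, and '' means "no backup". GNU sed on linux instead glues the suffix onto -i itself, so there the plain form works. A quick sketch (GNU sed assumed; the scratch path is illustrative):</p>

```shell
# GNU sed: -i takes no separate suffix argument.
printf 'hello world\n' > /tmp/helloworld.txt
sed -i 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt    # goodbye world
# To keep a backup with either sed, supply a suffix: -i.bak (GNU) or -i '.bak' (BSD)
```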
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) construct produces the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you get the numbers 0 through 4, one per line.</p>&#13;
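<p>As an aside, the index arithmetic can be skipped entirely: quoting the expansion "${FILES[@]}" yields one word per array element and keeps paths containing spaces intact. A sketch of the same loop (bash assumed):</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

# Iterate the elements directly; no seq or index arithmetic needed.
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```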
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
 ]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
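<p>A minimal sketch of the effect, plus one of the workarounds that page lists (feeding the loop by redirection rather than through a pipe):</p>

```shell
count=0
# Piped into the loop, the while body runs in a subshell (in bash),
# so the increments vanish when the pipeline finishes.
printf 'a\nb\nc\n' | while read -r line; do count=$((count + 1)); done
echo "after pipe: $count"          # still 0 in bash

# Feed the loop with a here-document instead: it now runs in the
# current shell and the variable survives.
count=0
while read -r line; do count=$((count + 1)); done <<EOF
a
b
c
EOF
echo "after redirection: $count"   # 3
```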
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
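<p>To see the setting in action, here is a throwaway-repo sketch (assumes git is installed; file and branch names are illustrative):</p>

```shell
# Build a tiny repo in which two branches diverge, then watch
# merge.ff=only refuse the non-fast-forward merge.
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email you@example.com && git config user.name you
git config merge.ff only
base=$(git symbolic-ref --short HEAD)

echo one > f && git add f && git commit -qm one
git checkout -qb feature && echo two > g && git add g && git commit -qm two
git checkout -q "$base" && echo three > h && git add h && git commit -qm three

# The branches have diverged, so no fast-forward is possible:
git merge feature || echo "refused: rebase feature first"
```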
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle pieces of wisdom that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a descriptive message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
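<p>The advice about descriptive assertion messages carries over to any xUnit framework. As an illustrative sketch (in Python&#39;s unittest rather than PHPUnit, with a made-up test case), the optional final message argument makes a failure explain itself:</p>

```python
import io
import unittest


class CartTest(unittest.TestCase):
    """Hypothetical example: every assertion carries a failure message."""

    def test_empty_cart_total_is_zero(self):
        total = 0  # stand-in for something like cart.total()
        # The final argument is shown when the assertion fails, so the
        # report says *why* the expectation exists, not just "0 != 1".
        self.assertEqual(0, total, "a brand new cart should have a zero total")


# Run the case quietly and capture the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartTest)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

<p>PHPUnit works the same way: the message is the last parameter of its assert methods.</p>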
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to see more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a>, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is a test style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null when tableref.column equals 0. Since MIN ignores null values, this effectively restricts the minimum to values greater than zero.</p>
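<p>If you want to see the trick in action without a MySQL server handy, SQLite supports NULLIF and MIN the same way. A minimal sketch, with a made-up table and data:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (group_id INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [(1, 0.0), (1, 9.99), (1, 4.50), (2, 0.0), (2, 0.0)],
)

# NULLIF(price, 0) turns zero prices into NULL; MIN skips NULLs, so each
# group reports its cheapest non-zero price (or NULL if it has none).
rows = conn.execute(
    "SELECT group_id, MIN(NULLIF(price, 0)) AS min_price "
    "FROM products GROUP BY group_id ORDER BY group_id"
).fetchall()
print(rows)  # [(1, 4.5), (2, None)]
```

<p>Note that a group with only zero prices yields NULL rather than 0, which you may want to handle in the application.</p>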
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then), you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. php_select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/<a href="mailto:aaron@awshost.pem">aaron@awshost.pem</a>&quot; <a href="mailto:aaron@aws.instance.com">aaron@aws.instance.com</a>:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a URL that I knew would be the name of an image. I knew strstr well, but that operates by giving you the remainder of a string from the first occurrence of some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
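<p>For contrast, Python&#39;s str type shows what consistent naming buys you: partition and rpartition differ only by a prefix, so knowing one tells you both the name and the behaviour of the other. A rough analogue of the strstr/strrchr pair (not an exact one, since these drop the delimiter):</p>

```python
url = "http://www.google.com/a/b/c/d.img"

# partition/rpartition split on the first/last occurrence of the needle;
# index [2] is everything after the separator.
first = url.partition("/")[2]   # remainder after the FIRST "/"
last = url.rpartition("/")[2]   # remainder after the LAST "/"

print(first)  # /www.google.com/a/b/c/d.img
print(last)   # d.img
```

<p>The symmetry makes the &#39;last occurrence&#39; variant discoverable from the first, which is exactly what strstr/strrchr fails at.</p>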
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically, if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
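<p>In shell terms, the override amounts to mirroring the core file&#39;s path under app/code/local. A minimal sketch, using a scratch directory to stand in for a real Magento root:</p>

```shell
# Scratch directory standing in for a real Magento root, purely
# for illustration; in a real store you would use your install path.
MAGE_ROOT=/tmp/magento-demo
mkdir -p "$MAGE_ROOT/app/code/core/Mage/GoogleCheckout/Model/Api/Xml"
echo '<?php // core Checkout.php' > "$MAGE_ROOT/app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php"

# Mirror the identical path under app/code/local; the autoloader
# checks local before core, so the copy there takes precedence.
mkdir -p "$MAGE_ROOT/app/code/local/Mage/GoogleCheckout/Model/Api/Xml"
cp "$MAGE_ROOT/app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php" \
   "$MAGE_ROOT/app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php"
```

<p>You then edit the copy under app/code/local, leaving app/code/core untouched for upgrades.</p>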
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to make Nginx more sociable: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. Back in November, though, I resolved to read more technical books, and to focus particularly on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and asks why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, Metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. The team formalised most of the vocabulary of OO software development while building Smalltalk, yet it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer, and after a few 30 minute sprints I pick up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013; we (still) don&#39;t have flying cars or hoverboards, AND, as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits that improve my command line efficiency.</p>
<p>Here&#39;s a small selection of ones I have picked up recently.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to compose a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
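<p>A quick illustration with throwaway sample data instead of /etc/passwd:</p>

```shell
# Tabulate ':'-delimited sample data; column -s':' consumes the
# delimiter and -t pads each field into an aligned column.
printf 'root:0\nwww-data:33\n' > /tmp/users.txt
column -s':' -t < /tmp/users.txt
```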
<p>These next few are specific to zsh. While I do love bash, since switching to zsh I haven&#39;t really looked back; it&#39;s little things like these that you can&#39;t give up when you work with a terminal every single day.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer app.</p>
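<p>If you are stuck on bash, the pushd and popd builtins together with dirs -v give a rough approximation, although there is no bare-number shortcut. A quick sketch:</p>

```shell
#!/usr/bin/env bash
cd /tmp
pushd /usr > /dev/null   # push /tmp onto the stack, cd to /usr
pushd /etc > /dev/null   # push /usr onto the stack, cd to /etc
dirs -v                  # list the stack, numbered from 0
popd > /dev/null         # pop the top entry, dropping back to /usr
pwd                      # now in /usr
```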
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local, the default, and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine; however, if you want to debug during a phpunit test you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely enough fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward any connection made to its own localhost:9000 back to the ssh client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time saving one. The other option is Vim and its xdebug plugin, which isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months; PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="https://github.com">Github</a> is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and worst of all, difficult for end-users. PEAR&#39;s age strictly speaking is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with varying dependencies. For example: say package x requires the stable release of package y, while package z requires a beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time or patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
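<p>Process substitution itself is worth knowing beyond xmllint. A minimal, xmllint-free sketch: bash exposes the inner command&#39;s output as a file-like path (something like /dev/fd/63) that any tool expecting a filename can read.</p>

```shell
#!/usr/bin/env bash
# <(command) expands to a path; 'cat' reads it exactly as it
# would a regular file, no temporary file required.
result=$(cat <(printf 'hello from a substituted process\n'))
echo "$result"
```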
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do:</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
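<p>To see it in action against a throwaway repository (the paths and commit message below are made up for the demo):</p>

```shell
# Build a scratch repo so the output is reproducible
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo one > app.php
echo two > lib.php
git add .
git -c user.email=you@example.com -c user.name=You commit -qm 'add two files'
# --format= suppresses the commit header, leaving only the affected files
OUT=$(git show --name-only --format= HEAD)
echo "$OUT"
cd /
rm -rf "$tmp"
```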
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing MySQL installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, anyone who can connect to the running mysqld service gets full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
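<p>A caveat from later MySQL versions: on 5.7 and up the user table no longer has a password column, so the UPDATE above will fail. With the server started the same way, something along these lines is needed instead (FLUSH PRIVILEGES comes first, to reload the grant tables so that ALTER USER is allowed to run):</p>

```sql
-- MySQL 5.7+ equivalent, run inside the same passwordless session
FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'newpassword';
```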
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there than on pretty much any other platform, because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, so I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, that Rails programmers live in a glass house and shouldn&#39;t throw stones, for one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously run $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
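<p>You can reproduce the whole dance locally, with a bare repository standing in for the remote (all the paths below are scratch names). Note that modern git can also do the fetch and the prune in one step via git fetch --prune:</p>

```shell
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/work" 2>/dev/null
git -C "$tmp/work" -c user.email=you@example.com -c user.name=You commit -q --allow-empty -m 'initial'
git -C "$tmp/work" push -q origin HEAD
git -C "$tmp/work" push -q origin HEAD:refs/heads/old-feature
git -C "$tmp/work" fetch -q origin
# A teammate deletes the branch from another clone...
git clone -q "$tmp/remote.git" "$tmp/other" 2>/dev/null
git -C "$tmp/other" push -q origin :old-feature
BEFORE=$(git -C "$tmp/work" branch -r)   # stale origin/old-feature still listed
git -C "$tmp/work" remote prune origin > /dev/null
AFTER=$(git -C "$tmp/work" branch -r)    # stale tracking ref pruned away
echo "$BEFORE"; echo '---'; echo "$AFTER"
rm -rf "$tmp"
```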
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer type to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the _prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later on, you could imagine another engineer coming in to XSS-safe the code: they fixed one bit but (and programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item, because it no longer meets the defined dependency requirements.</p>
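<p>For reference, the shape of the trick looks roughly like this. A sketch only: the menu node name depends on which entry you want to hide (report here stands for the top-level Reports menu), and the module named in depends is deliberately one that does not exist:</p>

```xml
<!-- a custom module's adminhtml.xml (module and node names are illustrative) -->
<config>
    <menu>
        <report>
            <depends>
                <module>Mage_SomeModuleThatDoesNotExist</module>
            </depends>
        </report>
    </menu>
</config>
```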
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
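<p>Putting that together, a worked sketch in a scratch directory (the file names and dates are invented for the demo):</p>

```shell
tmp=$(mktemp -d)
mkdir "$tmp/reports"
# Boundary files marking the start and end of the range
touch -t 202401010000 "$tmp/start_date_file"
touch -t 202401312359 "$tmp/end_date_file"
# One file inside the range, one outside it
touch -t 202401150000 "$tmp/reports/inside.txt"
touch -t 202402150000 "$tmp/reports/outside.txt"
FOUND=$(find "$tmp/reports" -type f -newer "$tmp/start_date_file" ! -newer "$tmp/end_date_file")
echo "$FOUND"    # only inside.txt falls between the two dates
rm -rf "$tmp"
```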
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped semicolon terminates the command sequence (much like it does in regular bash).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your macbook air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt macbook pro.</p>
<p><em>The other thing</em> which I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change with no problem.</p>
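<p>Sketching the whole dance out. To keep this runnable without root it works on a scratch stand-in for /etc/shells; on a real system you would edit /etc/shells itself via sudo, and /opt/local/bin/bash is an assumption about where port installed bash:</p>

```shell
NEW_SHELL=/opt/local/bin/bash     # assumed Macports install path
SHELLS_FILE=$(mktemp)             # stand-in for /etc/shells (edit the real one via sudo)
printf '/bin/bash\n/bin/zsh\n' > "$SHELLS_FILE"
# Append the new shell only if it is not already listed
grep -qx "$NEW_SHELL" "$SHELLS_FILE" || echo "$NEW_SHELL" >> "$SHELLS_FILE"
tail -n 1 "$SHELLS_FILE"          # → /opt/local/bin/bash
# Now "chsh -s /opt/local/bin/bash" would be accepted, since the shell is listed
```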
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; - | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query to mysql and ask it to give you raw unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
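<p>You can dry-run the tail of that pipeline without a database by substituting printf for the mysql step (the colour values are just sample rows):</p>

```shell
# Stand-in for the mysql step: three sample rows, one per line
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# → ['red','green','blue'];
```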
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in this date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
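<p>So, concretely, to find documents whose expiry date (using a hypothetical expiry_date field) is either unset or falls within the next week:</p>

```
-(-expiry_date:[NOW TO NOW+7DAYS] AND expiry_date:[* TO *])
```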
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
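<p>Concretely, the change was a single line in applyToProduct():</p>
<pre><code>// Before: one shared validator; as an effectively static object it
// holds onto its references for the life of the process.
$validator = Mage::getSingleton(&#39;salesrule/validator&#39;)-&gt;init(1, 1, $couponCode);

// After: a fresh validator per call, eligible for garbage collection
// as soon as it falls out of scope.
$validator = Mage::getModel(&#39;salesrule/validator&#39;)-&gt;init(1, 1, $couponCode);
</code></pre>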
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. While this means it also has to be reconstructed on each iteration, it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
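<p>Something like this (the archive and extracted directory names will vary with each release):</p>
<pre><code>$ sudo tar xzf PhpStorm-4.0.1.tar.gz -C /opt
$ sudo ln -s /opt/PhpStorm-117.93 /opt/PhpStorm
</code></pre>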
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean, in the absolute best case, merely becoming un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
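<p>That is just:</p>
<pre><code>$ cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
</code></pre>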
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
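<p>In other words, from inside the chroot, something along these lines (adjust the device to wherever your bootloader lives):</p>
<pre><code>$ apt-get update
$ apt-get dist-upgrade
$ grub-install /dev/sda
$ update-grub
</code></pre>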
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, and specifically the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each; this is typically used on grouped products when determining if all their children stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column was first and the product id column was second (unlike the if branch, where the product id column comes first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set consisted of just two rows, one for each unique status code. For this code to work as expected, the entity id (product id) needs to come first in the result set, so that it is used as the key.</p>
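<p>The collapse is easy to reproduce with a plain-PHP sketch of fetchPairs() (this is an illustration, not the actual Zend_Db implementation):</p>
<pre><code>// fetchPairs(): first column becomes the array key, second the value
function fetchPairs(array $rows)
{
    $result = array();
    foreach ($rows as $row) {
        list($key, $value) = $row;
        $result[$key] = $value;
    }
    return $result;
}

// Buggy column order (status first): later rows overwrite earlier ones,
// leaving one entry per unique status code.
fetchPairs(array(array(1, 101), array(1, 102), array(0, 103)));
// array(1 =&gt; 102, 0 =&gt; 103)

// Fixed order (entity_id first): one entry per product.
fetchPairs(array(array(101, 1), array(102, 1), array(103, 0)));
// array(101 =&gt; 1, 102 =&gt; 1, 103 =&gt; 0)
</code></pre>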
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable, statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think of what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being. </p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practise them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to Twig, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (the exception being Mac OSX&#39;s Terminal.app, which starts login shells), .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell in other ways, such as with the su - command, or via the explicit login shell option some desktop environments provide. In these cases the same rule applies: being a login shell means .bash_profile is sourced (and .bashrc only if your .bash_profile sources it).</p>
<p>I tend to put environment setup in .bash_profile: things like paths and one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and put everything in .bashrc.</p>
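<p>A minimal ~/.bash_profile that takes the latter approach:</p>
<pre><code># ~/.bash_profile -- read by login shells only
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
</code></pre>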
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
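<p>If you would rather do it declaratively, the equivalent layout XML looks like this (the block name is arbitrary):</p>
<pre><code>&lt;reference name="content"&gt;
    &lt;block type="cms/block" name="home.static.block"&gt;
        &lt;action method="setBlockId"&gt;&lt;block_id&gt;identifier&lt;/block_id&gt;&lt;/action&gt;
    &lt;/block&gt;
&lt;/reference&gt;
</code></pre>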
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
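<p>So, for example, a chain of Zend_Mail&#39;s fluent setters resolves back to the mock at every step:</p>
<pre><code>$result = $mock-&gt;setSubject(&#39;Hello&#39;)
               -&gt;setBodyText(&#39;Testing a fluent mock&#39;)
               -&gt;addTo(&#39;someone@example.com&#39;);

$this-&gt;assertSame($mock, $result); // every call returned the mock itself
</code></pre>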
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, patents reduce the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
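<p>To see what the sed stage of that pipeline actually does, you can run it against a sample selections line on its own; a minimal sketch (the package name is made up):</p>

```shell
# transform a dpkg selections entry from "deinstall" to "purge",
# exactly as the sed stage of the one-liner does
echo "oldpackage deinstall" | sed 's/deinstall/purge/'
# prints: oldpackage purge
```

<p>Feeding the rewritten selections back in with <em>dpkg --set-selections</em> then marks those packages for purging.</p>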
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Note that <em>modifyvm</em> can only be used while a VM is powered off; for running VMs use <em>controlvm</em> instead.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there; specifically though, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then: we need compatible x86 shared libraries. But when you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and several Qt libraries.</p>
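<p>A handy trick when chasing these errors is to filter ldd&#39;s output down to just the unresolved entries; a sketch, using /bin/ls as a stand-in for /usr/bin/skype:</p>

```shell
# print only the shared libraries the dynamic loader cannot resolve;
# nothing is printed when every dependency is satisfied
ldd /bin/ls | grep 'not found'
```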
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The changed code works, nothing appears to break and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out, I think, for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will.</p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example to convert a bunch of images from 1920x1080 to say 1280x720, you could do something like this</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
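<p>As an aside, the sed call in that loop can be replaced with bash parameter expansion, which avoids spawning an extra process per file; a sketch (the file name is made up):</p>

```shell
# strip the .jpg suffix and append -resized.jpg, entirely within bash
IMAGE="holiday-photo.jpg"
echo "${IMAGE%.jpg}-resized.jpg"
# prints: holiday-photo-resized.jpg
```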
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In these cases, convert would actually resize the image to something like 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
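<p>As a further aside, newer versions of git (1.7.0 onwards) also accept an explicit --delete flag, which does the same thing and is harder to mistype; a sketch using a throwaway local remote:</p>

```shell
# set up a throwaway remote with a develop branch on it
cd "$(mktemp -d)"
git init --bare remote.git
git clone remote.git clone && cd clone
git -c user.name=t -c user.email=t@example.com commit --allow-empty -m "init"
git checkout -b develop
git push origin develop          # origin/develop now exists
# the explicit equivalent of `git push origin :develop`
git push origin --delete develop
git branch -r                    # origin/develop is gone
```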
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how sponge waits until end-of-file (EOF) before opening and writing to the output file. I.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin-1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true).</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it to the original file.</p>
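<p>If you don&#39;t have moreutils to hand, a similar in-place effect can be had with a temporary file; a minimal sketch:</p>

```shell
# convert each file in place via a temporary copy; the mv only runs
# if iconv succeeded, so a failed conversion leaves the original intact
shopt -s nullglob   # make patterns that match nothing expand to nothing
for FILE in *.c *.h; do
    iconv -f cp1252 -t utf-8 "$FILE" > "$FILE.tmp" && mv "$FILE.tmp" "$FILE"
done
```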
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
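<p>You can convince yourself the pattern works without touching the network by letting cat stand in for wget; a sketch:</p>

```shell
# build a small tarball locally, then extract it through the same
# process-substitution pattern (cat stands in for the wget download)
cd "$(mktemp -d)"
mkdir atarfile && echo "hello" > atarfile/file.txt
tar czf atarfile.tar.gz atarfile
rm -r atarfile
tar zx < <(cat atarfile.tar.gz)
cat atarfile/file.txt
# prints: hello
```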
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save changes. Then go through the list again, disabling the previously greyed-out modules (they were blocked because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>We can now produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
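<p>If command substitution is new to you, here is a tiny self-contained illustration of the same pattern using ordinary commands (the directory and file names are made up):</p>

```shell
# The backtick command runs first; its stdout is split on whitespace
# and passed as arguments to the outer command, exactly as with drush.
mkdir -p /tmp/subst-demo
cd /tmp/subst-demo
touch mod_a.txt mod_b.txt mod_c.txt
echo disabling `ls *.txt`   # prints: disabling mod_a.txt mod_b.txt mod_c.txt
```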
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool, and comes with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is already set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
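<p>As an aside, with a reasonably recent git (1.7.0 or later) the push and --set-upstream steps can be combined into one with git push -u. A self-contained demo using throwaway repositories (the /tmp paths and identity are made up):</p>

```shell
# Create a bare stand-in 'remote' and a working repo to push from.
git init --bare /tmp/demo-remote.git
git init /tmp/demo-work
git -C /tmp/demo-work config user.email demo@example.com
git -C /tmp/demo-work config user.name Demo
git -C /tmp/demo-work remote add origin /tmp/demo-remote.git
git -C /tmp/demo-work commit --allow-empty -m 'Initial feature commit'
git -C /tmp/demo-work checkout -b my-new-feature
# -u pushes the branch AND records origin/my-new-feature as its upstream.
git -C /tmp/demo-work push -u origin my-new-feature
git -C /tmp/demo-work rev-parse --abbrev-ref 'my-new-feature@{upstream}'
```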
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but should also be added to the &#39;wheel&#39; group), use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G are <em>appended</em> to the user&#39;s existing list. Without it, the user&#39;s existing supplementary groups are replaced by whatever you supply.</p>
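<p>Because a missing -a silently wipes group memberships, it is worth snapshotting the current list before running usermod; a minimal sketch (the backup path is made up):</p>

```shell
# Save the invoking user's current supplementary groups; if a usermod
# call ever goes wrong, this file shows exactly what to restore with -G.
id -nG > /tmp/groups.backup
cat /tmp/groups.backup
```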
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a>, most notably its speed, and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
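<p>Note that git stash apply leaves the stash entry behind; git stash pop applies and drops it in one step. A throwaway demo of the same rescue (the repo path and identity are made up):</p>

```shell
# Recreate the situation: an accidental edit made on the wrong branch.
git init /tmp/stash-demo
git -C /tmp/stash-demo config user.email demo@example.com
git -C /tmp/stash-demo config user.name Demo
echo one > /tmp/stash-demo/file.txt
git -C /tmp/stash-demo add file.txt
git -C /tmp/stash-demo commit -m 'Initial commit'
git -C /tmp/stash-demo branch develop
echo two >> /tmp/stash-demo/file.txt      # oops, edited on the wrong branch
git -C /tmp/stash-demo stash
git -C /tmp/stash-demo checkout develop
git -C /tmp/stash-demo stash pop          # apply + drop in one step
git -C /tmp/stash-demo diff --name-only   # the change now lives on develop
```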
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
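<p>With a reasonably recent git (1.6.6 onwards) there is an even shorter spelling: when exactly one remote has a branch named develop, a bare git checkout develop creates the tracking branch automatically. A throwaway demo (the /tmp paths and identity are made up):</p>

```shell
# Build an 'upstream' repo with a develop branch, then clone it.
git init /tmp/dwim-upstream
git -C /tmp/dwim-upstream config user.email demo@example.com
git -C /tmp/dwim-upstream config user.name Demo
git -C /tmp/dwim-upstream commit --allow-empty -m 'Initial commit'
git -C /tmp/dwim-upstream branch develop
git clone /tmp/dwim-upstream /tmp/dwim-clone
# No origin/ prefix needed: git guesses the remote branch and tracks it.
git -C /tmp/dwim-clone checkout develop
git -C /tmp/dwim-clone rev-parse --abbrev-ref 'develop@{upstream}'
```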
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool), do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
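<p>For example, a php.ini include_path that puts an application&#39;s bundled library ahead of the system-wide PEAR copy might look like this (the paths are purely illustrative):</p>

```ini
; Local application libraries first, PEAR-installed Zend Framework last
include_path = ".:/var/www/myapp/library:/usr/share/php"
```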
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
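<p>Equivalently, you can let git write that file for you with two git config calls run as the Jenkins user. In this self-contained sketch the HOME override is a stand-in for the real Jenkins home directory:</p>

```shell
# Point HOME at a scratch directory so the demo can't touch a real
# account; on an actual server you would run these as the jenkins user.
export HOME=/tmp/jenkins-home-demo
mkdir -p "$HOME"
git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"
cat "$HOME/.gitconfig"
```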
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to change &#39;job-name&#39; to &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to do a build before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ # pipe through tee so the file write itself runs with root privileges
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have burned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
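<p>The embedded gist has the full query; the general shape of an INFORMATION_SCHEMA size report looks something like this (the schema name is a placeholder):</p>

```sql
-- Largest tables in a schema, data + index size in MB (schema name is a placeholder)
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS total_mb
FROM information_schema.TABLES
WHERE table_schema = 'my_database'
ORDER BY data_length + index_length DESC
LIMIT 10;
```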
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not much help by itself, but it was enough for Google to lead me to a solution: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be done either through the graphical package manager or with aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
</li>
</ol>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
<p>Personally I find option 4 the best, as it involves the least amount of work.</p>
<p>To avoid having to do this for each new repository, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this, my reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way, using a call to mysql to get a list of tables matching a glob pattern, putting them in an array and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
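The gist with my version is embedded below. The core idea, sketched as a single function (the name is mine, and this compact variant passes the whole table list to one mysqldump call rather than dumping table by table):

```shell
# mysqldump_bypattern <user> <db> <pattern>
# Dump only the tables whose names match a SQL LIKE pattern.
# Sketch only -- see the embedded gist for the version I actually use.
mysqldump_bypattern() {
  local user="$1" db="$2" pattern="$3"
  local tables
  # Ask mysql for the matching table names; -N drops the header row
  tables=$(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '$pattern'" "$db")
  # Deliberately unquoted so each table name becomes its own argument
  mysqldump -u"$user" -p "$db" $tables
}
```

e.g. `mysqldump_bypattern user mydb 'mytables_%' > mytables.sql`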
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the EGit/JGit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest JGit packages not yet being in the Indigo release p2 update repository. Until they arrive there, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install the plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed. puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another to get its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby string. In Ruby (and most OO languages) two strings, even if they consist of the same sequence of characters, are different objects. Two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. In a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
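To get a feel for the trade-off, you can race the two compressors on a throwaway file (the file name and contents here are purely illustrative, not a real dump):

```shell
# Generate a toy 'dump' and compress it both ways.
# gzip is the fast/larger option, bzip2 the slow/smaller one.
seq 1 100000 > sample.sql
gzip -c sample.sql > sample.sql.gz
bzip2 -c sample.sql > sample.sql.bz2
# Compare the resulting sizes in bytes
wc -c sample.sql sample.sql.gz sample.sql.bz2
```

Prefixing the two compression lines with `time` shows the speed difference as well as the size difference.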
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump, meaning other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of a MyISAM table can be lost as writes occur to it during the backup process; that risk has to be weighed against the risk of blocking access to the table for the length of the backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
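Putting the options from this post together, a backup wrapper might look like this (the function name is mine, and -p will still prompt for a password):

```shell
# backup_db <user> <db> <outfile.gz>
# Combines the locking and import-performance options discussed above,
# compressing with gzip for a reasonable speed/size balance.
backup_db() {
  local user="$1" db="$2" outfile="$3"
  mysqldump --single-transaction --skip-lock-tables \
            --disable-keys --no-autocommit \
            -u"$user" -p "$db" | gzip -c > "$outfile"
}
```

e.g. `backup_db user mydatabase mydump.sql.gz`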
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository, with a working tree, in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for a local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Shows pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, runs the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with <code>git push origin master</code>. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository set up following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite laborious, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C, and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed to the program.</p>&#13;
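<p>For comparison, Python sides with C and PHP here: <code>sys.argv[0]</code> is the program name and real arguments start at index 1. A small sketch (not from the original post; note that with <code>python -c</code> the "program name" is the literal string "-c"):</p>

```python
import subprocess
import sys

# Run a one-line Python program with a single argument, "hello".
# With `python -c`, sys.argv[0] is the literal "-c"; for a script
# run as `python test.py hello` it would be "test.py".
code = "import sys; print(sys.argv[0]); print(sys.argv[1])"
result = subprocess.run([sys.executable, "-c", code, "hello"],
                        capture_output=True, text=True)
print(result.stdout)  # prints "-c" then "hello"
```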
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin / feature you need to go to the install new software screen. On a Mac it's found by highlighting the help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OS X MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often in the course of the index's lifecycle want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document with an ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
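<p>The same filter-by-visibility idea exists in other languages' reflection APIs. As a rough analogue (a Python sketch, not part of the original post), the standard <code>inspect</code> module can enumerate a class's methods and filter them, here using the leading-underscore naming convention as a stand-in for visibility modifiers:</p>

```python
import inspect

class MyClass:
    def do_work(self):
        pass

    def _internal(self):
        pass

# Collect function members, treating a leading underscore as
# "non-public" (Python has no enforced visibility modifiers),
# loosely mirroring getMethods(ReflectionMethod::IS_PUBLIC)
public_methods = [
    name
    for name, member in inspect.getmembers(MyClass, inspect.isfunction)
    if not name.startswith('_')
]
print(public_methods)  # -> ['do_work']
```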
<p>As with many platforms, it seems that in PHP documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone.  Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3.  Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book, or if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster; you can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
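<p>If you're curious what the conversion actually does, here is a rough sketch of iconv's job expressed with Python codecs (not part of the original post): decode the bytes as Latin-1, then re-encode the text as UTF-8. Note how 'ü' grows from one byte to two.</p>

```python
# A sketch of what `iconv -f latin1 -t utf-8` does to the dump:
# interpret each byte as Latin-1, then write the text back out
# as UTF-8.
latin1_bytes = "Bücher".encode("latin-1")   # as stored in the old dump
text = latin1_bytes.decode("latin-1")       # -f latin1
utf8_bytes = text.encode("utf-8")           # -t utf-8

print(latin1_bytes)  # b'B\xfccher'     (u-umlaut is one byte, 0xFC)
print(utf8_bytes)    # b'B\xc3\xbccher' (u-umlaut is two bytes)
```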
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to import, convert it to UTF-8 too:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but keep your CV conservative, well spaced and restrained in its use of fonts. Giant mastheads, fancy bullets and a mess of typefaces aren't impressing anyone, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key; save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables whose names begin with a capital letter. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
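<p>A minimal sketch of the behaviour (the constant names here are invented for illustration): reassigning a constant only prints a warning to stderr, and the new value sticks.</p>

```ruby
# Ruby "constants" are simply identifiers that start with a capital
# letter. Reassigning one prints "already initialized constant" as a
# warning -- but the assignment succeeds anyway.
TIMEOUT = 30
TIMEOUT = 60          # warned, not prevented

# freeze protects the object, not the binding: the name can still be
# pointed at an entirely new object.
GREETING = "hello".freeze
GREETING = "goodbye"  # warned again, and it still works
```

<p>If you want anything resembling enforcement, you have to police it yourself; the interpreter will not.</p>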
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and of course a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important language with which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple TCP client in PHP, strip out all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit though of learning a bit of C or C++, is to get an “appreciation” of memory management. C is unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in. But similarly, a sometimes useful characteristic that makes the environment still relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes of storage (room for 19 characters plus the terminating NUL), and str refers to the address in memory where those bytes live. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
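<p>A rough C sketch of what that sugar costs (the function name is my own, for illustration): every concatenation means measuring both strings, allocating a fresh buffer and copying the bytes across.</p>

```c
#include <stdlib.h>
#include <string.h>

/* Join two C strings into freshly allocated storage -- roughly what
 * PHP's '.' operator has to do for you on every concatenation:
 * measure, allocate, copy. The caller must free() the result. */
char *str_concat(const char *a, const char *b)
{
    size_t la = strlen(a);
    size_t lb = strlen(b);
    char *out = malloc(la + lb + 1);  /* +1 for the terminating NUL */
    if (out == NULL)
        return NULL;
    memcpy(out, a, la);
    memcpy(out + la, b, lb + 1);      /* copies b's NUL as well */
    return out;
}
```

<p>Do this in a tight loop and the malloc/copy cost adds up, which is exactly why heavy string-building in any language benefits from buffering rather than repeated concatenation.</p>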
<p>As mentioned above, PHP was originally intended to be a purely templating language for C web applications, which is where PHP modules / extensions come in. These were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
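<p>A minimal sketch (the CSV fields are invented for illustration): build a CSV in memory with fputcsv, then read the whole buffer back without ever touching the filesystem.</p>

```php
<?php
// Write CSV rows to an in-memory stream instead of a temp file.
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'widget'));

rewind($fh);                      // seek back to the start of the "file"
$csv = stream_get_contents($fh);  // the whole buffer as a string
fclose($fh);

echo $csv;
```

<p>For large payloads, php://temp behaves the same way but spills to disk past a size threshold, which keeps memory usage bounded.</p>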
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way, is actually very simple, and that is to simply use curl to resubmit the callback. This is assuming you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first  place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded POST data), and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today, have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>529</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, not ALongNameForAModel.php.</p>&#13;
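<p>The alias-to-path translation can be sketched in shell. This is a simplified illustration of the mapping, not Magento's actual autoloader code: each underscore-separated part has its first letter uppercased, and the parts become directory names:</p>

```shell
alias='a_long_name_for_a_model'
# split on '_', capitalise the first letter of each part, re-join with '/'
path=$(echo "$alias" | awk -F'_' '{ for (i = 1; i <= NF; i++) $i = toupper(substr($i, 1, 1)) substr($i, 2); print }' OFS='/')
echo "${path}.php"   # -> A/Long/Name/For/A/Model.php
```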
<p>On Windows (where the file system is case-insensitive) this is fine; on a case-sensitive file system, e.g. case-sensitive HFS (Mac) or a typical Unix file system, this will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes roughly a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
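<p>You can sanity-check the interval arithmetic from the shell. With GNU date (Linux coreutils; BSD/macOS date uses different flags):</p>

```shell
# subtract 30 days from a fixed date, print as YYYY-MM-DD
date -d '2010-05-20 -30 days' +%F   # -> 2010-04-20
```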
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p><br />This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is simplest, create some html, stuff it into a  .phtml file and copy it to a directory within your theme, which  resides (relative to the store root dir) in  'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates  (app/design/frontend/default/default/layout') to refer to your new template  block. For most general purpose, globally available blocks, this will be  'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>Type="core/template' refers to a Magento class  Mage_Core_Block_Template. This type can be different if you want a  specific type of block. But core/template is the simplest way to include  some text on a page. The 'name' and 'as' attributes allow you to  reference the block in your enclosing templates, e.g. 3column.phtml, or  other layout .xml files. The template attribute is the relative path to  the text template you want to include and is relative to the theme  template root i.e.  app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including  the template on a page is easy. In 3column.phtml, for example, simply  put &lt;?php $this-&gt;getChildHtml('my_product_finder') ?&gt; and the  text will be included. Remember to refresh, or clear Magento's cache  first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
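<p>As a sketch, the relevant line in ~/.profile (or /etc/profile) would look like this; substitute whichever UTF-8 locale 'locale -a' shows on your machine:</p>

```shell
# make every locale category use a UTF-8 aware locale
export LC_ALL='de_DE.UTF-8'
```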
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes UTF-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-UTF-8 text as UTF-8. UTF-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ: a cp-1252 trademark symbol has a different code in UTF-8. Try to render a cp-1252 copyright symbol as UTF-8 and you will just see a question mark in the browser, because that byte is not a valid UTF-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
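<p>The same conversion can be tried from the command line with the iconv utility (assuming it is installed, as it is on most Unix-like systems):</p>

```shell
# 0xE9 is 'é' in ISO-8859-1; iconv re-encodes it as the two-byte UTF-8 sequence 0xC3 0xA9
printf 'caf\xe9\n' | iconv -f ISO-8859-1 -t UTF-8
```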
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem that sed -i 's/hello/goodbye/g' helloworld.txt is the way to achieve it.</p>&#13;
<p>On Mac OS X's BSD sed it isn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to pass an empty backup suffix as a separate argument: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
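<p>If you can live with a backup file, supplying a real suffix avoids the quirk entirely; this form works with both BSD and GNU sed:</p>

```shell
printf 'hello world\n' > /tmp/hw.txt
# edit in place, keeping the original as /tmp/hw.txt.bak
sed -i.bak 's/hello/goodbye/g' /tmp/hw.txt
cat /tmp/hw.txt   # -> goodbye world
```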
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expression returns the number of elements in FILES, and the seq command produces a sequence of numbers from x to y; if you call seq 0 4, you will get 0, 1, 2, 3, 4. So $(seq 0 $((${#FILES[@]} - 1))) yields every valid index of the array.</p>&#13;
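As an aside, bash can also iterate the array values directly, skipping the index arithmetic entirely; a quick sketch:

```shell
# Iterate the array values themselves; quoting "${FILES[@]}" keeps
# any paths containing spaces intact
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```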
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: base urls, test payment or shipping account details, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
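To see the value the alias splices in, here is a throwaway-repository sketch (the branch name zendesk and the user details are just placeholders):

```shell
# Build a scratch repository so the commands have something to act on
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m 'initial commit'
git checkout -q -b zendesk

# This prints the current branch name; the alias splices it into
# --set-upstream-to=origin/<branch>
git symbolic-ref --short HEAD   # prints: zendesk
```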
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each stage in a bash pipeline executes in a separate subshell… this means variables cannot be passed along the pipeline, as each new subprocess receives a brand new environment.</p>
<p>For some workarounds check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
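A minimal sketch of the gotcha, plus one of the workarounds from that page (bash process substitution):

```shell
# Each pipeline stage runs in a subshell, so COUNT incremented inside
# the while loop is lost once the pipeline ends
COUNT=0
printf 'a\nb\nc\n' | while read -r line; do
  COUNT=$((COUNT + 1))
done
echo "$COUNT"   # prints 0, not 3

# Workaround: feed the loop via process substitution so it runs in the
# current shell (bash-specific)
COUNT=0
while read -r line; do
  COUNT=$((COUNT + 1))
done < <(printf 'a\nb\nc\n')
echo "$COUNT"   # prints 3
```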
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
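A quick throwaway-repository sketch of the behaviour (user details are placeholders):

```shell
# With merge.ff=only, a merge that cannot fast-forward is refused
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "You"
git config merge.ff only

git commit -q --allow-empty -m base
git checkout -q -b feature
git commit -q --allow-empty -m feature-work
git checkout -q -
git commit -q --allow-empty -m diverging-work

# The branches have now diverged, so this merge is rejected
git merge feature 2> /dev/null || echo "refused: not a fast-forward"
```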
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing a description message as the final argument to assert statements. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and a brief comparison between the two principal xUnit TDD styles: the Statist and the Mockist/London School. The former is a style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
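The same trick can be tried out quickly with SQLite, which also supports NULLIF and MIN (the table and prices below are made up for illustration):

```shell
# Group 1 has a 0.00 price that NULLIF turns into NULL, so MIN skips it;
# group 2 has only 0.00 prices, so its minimum comes back NULL (empty)
sqlite3 ":memory:" <<'SQL'
CREATE TABLE products (group_id INTEGER, price REAL);
INSERT INTO products VALUES (1, 0.0), (1, 9.99), (1, 4.5), (2, 0.0);
SELECT group_id, MIN(NULLIF(price, 0)) FROM products GROUP BY group_id;
SQL
```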
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you were born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired.</p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55.</p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-gd php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now that the alias is active, we can check that it&#39;s working.</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…).</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed ssl certificate / key pair and storing it under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file for the mysql version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, then add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
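<p>For comparison, the same first-vs-last distinction written with Python&#39;s find/rfind (not from the original post, just to make the behaviour explicit):</p>

```python
url = "http://www.google.com/a/b/c/d.img"

# From the LAST occurrence of the needle (what strrchr does)
last = url[url.rfind("/"):]
print(last)   # /d.img

# From the FIRST occurrence of the needle (what strstr does)
first = url[url.find("/"):]
print(first)  # //www.google.com/a/b/c/d.img
```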
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years, and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted, and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate the primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk, and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However, I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s, so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away, and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful, even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and, at times, support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those (whether in his lab, the other labs, or management) who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you pair a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it or, worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems, it becomes easier to let more significant problems slide too. Hence the rule of thumb: &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
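<p>A minimal sketch of that lookup order, purely for illustration (the resolve helper and the file set are hypothetical, not Magento code):</p>

```python
# Codepools in the priority order described above: local wins over
# community, which wins over core.
CODEPOOLS = ["app/code/local", "app/code/community", "app/code/core"]

def resolve(class_name, existing_files):
    """Return the first codepool path that contains the class file."""
    relative = class_name.replace("_", "/") + ".php"
    for pool in CODEPOOLS:
        candidate = pool + "/" + relative
        if candidate in existing_files:
            return candidate
    return None

# The same class exists in both core and local; local shadows core.
files = {
    "app/code/core/Mage/Core/Model/Foo.php",
    "app/code/local/Mage/Core/Model/Foo.php",
}
print(resolve("Mage_Core_Model_Foo", files))
# app/code/local/Mage/Core/Model/Foo.php
```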
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink; don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to work out why its lasting success owes more to blazing a trail for others to follow than to its life as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing: all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. And although the team formalised much of the vocabulary of OO software development while building Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been easier, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column command columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
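<p>As an aside, these shortcuts aren&#39;t vanilla zsh defaults; they come from a handful of shell options plus some aliases. A rough sketch of the relevant ~/.zshrc lines, close to what frameworks like oh-my-zsh ship (so treat the exact incantations as illustrative), looks like this:</p>

```shell
# Directory stack shortcuts -- illustrative sketch, mirrors oh-my-zsh's setup
setopt AUTO_CD            # typing a bare directory name cd's into it
setopt AUTO_PUSHD         # every cd pushes the old directory onto the stack
setopt PUSHD_IGNORE_DUPS  # keep duplicates off the stack
setopt PUSHD_MINUS        # swap the meaning of +n and -n stack indices

alias d='dirs -v | head -10'      # list the stack with indices
for index in {1..9}; do
  alias "$index"="cd +${index}"   # typing `3` jumps to stack entry 3
done
unset index
```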
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have MacPorts in /opt/local, the default, and are using the mysql55 port).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to the localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in a timely enough fashion for xdebug to hook into.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
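<p>If you find yourself typing that -R flag every session, the same forward can live in your ssh config instead; a minimal sketch, assuming the myvm.local host from the example above:</p>

```
# ~/.ssh/config -- forward the VM's port 9000 back to the machine
# where the IDE's debug listener runs
Host myvm.local
    RemoteForward 9000 localhost:9000
```

<p>With that in place, a plain ssh myvm.local sets up the tunnel automatically.</p>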
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable release of package y, while package z requires the beta release of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
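<p>To make the contrast with PEAR concrete, declaring a dependency is just a couple of lines of JSON in your project&#39;s composer.json (monolog/monolog here is merely an illustrative package, not a recommendation):</p>

```json
{
    "require": {
        "monolog/monolog": "1.*"
    }
}
```

<p>A composer install then resolves the dependency graph, fetches everything into vendor/ and generates a class autoloader for the lot.</p>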
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool&#39;s rapid development into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting XML, though, is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
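<p>Process substitution itself is worth a moment: bash replaces each &lt;(command) with a /dev/fd path that the receiving program can open like an ordinary file. A generic sketch (nothing Magento-specific here):</p>

```shell
# diff wants two files; process substitution hands it two command outputs
diff <(printf 'a\nb\n') <(printf 'a\nc\n') || true

# the substituted "file" is just a descriptor path, e.g. /dev/fd/63
echo <(printf 'x\n')
```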
<p>So there you have it: magerun and xmllint, a simple way to get a formatted, easy-to-examine view of how Magento puts your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
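<p>If you want to play with these flags safely, a throwaway repository is enough; this sketch (file names invented for illustration) shows --name-only suppressing the diff body:</p>

```shell
# build a scratch repo so the example is self-contained
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo

echo 'hello' > readme.txt
git add readme.txt
git commit -q -m 'add readme'

# show the last commit's metadata plus just the touched file names
git show --name-only HEAD
```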
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First, shut down the running instance and then restart it directly:</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important as, by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was largely respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.</p>
<p>This can be partially forgiven because, owing to the age of the language, there&#39;s plenty of out-of-date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> ranking well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works and what doesn&#39;t, and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively whether what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I&#39;ve got plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
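<p>The lookup order is easy to demonstrate outside Rails. Here is a plain-Ruby sketch of the documented search semantics (not the actual ActiveSupport implementation): handlers are scanned from the bottom of the list up, and the first class for which exception.is_a?(klass) holds wins.</p>

```ruby
# toy handler table, in declaration order (top to bottom)
handlers = [
  [StandardError, ->(e) { "standard error handler" }],
  [ArgumentError, ->(e) { "argument error handler" }],
]

# mimic the documented search: bottom-to-top, first is_a? match wins
def dispatch(handlers, exception)
  _klass, handler = handlers.reverse.find { |klass, _| exception.is_a?(klass) }
  handler && handler.call(exception)
end

dispatch(handlers, ArgumentError.new)   # => "argument error handler"

# register a blanket Exception handler at the bottom of the list...
handlers << [Exception, ->(e) { "catch-all handler" }]

# ...and now nothing else gets a look in
dispatch(handlers, ArgumentError.new)   # => "catch-all handler"
```

<p>Which is exactly why the blanket handler belongs at the <em>top</em> of your rescue_from declarations, not the bottom.</p>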
<p>What can we learn from this? Well, one thing is that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, then, I need to prune my branch list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>This fix is easy, replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one instance but (programmers being human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
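<p>Putting those two steps together, a small self-contained sketch (the timestamps and file names here are made up for illustration):</p>

```shell
# create two boundary files marking the range 6 Aug 2012 00:00 to 7 Aug 2012 23:59
touch -t 201208060000 /tmp/start_date_file
touch -t 201208072359 /tmp/end_date_file

# list every regular file under the current directory whose mtime falls in range
find . -type f -newer /tmp/start_date_file ! -newer /tmp/end_date_file -ls
```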
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes {} with each found filename) and \; terminates the command sequence, much like ; does in regular bash (it&#39;s escaped so the shell passes it through to find rather than interpreting it itself).</p>
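<p>As an aside, GNU find also lets you terminate -exec with + instead of \;. That batches as many file names as possible into each rm invocation, rather than forking rm once per file, which is noticeably faster on big trees:</p>

```shell
# same deletion as above, but + batches file names into as few rm calls as possible
find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -f {} +
```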
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
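<p>For example, a sketch assuming Macports put bash at /opt/local/bin/bash:</p>

```shell
NEW_SHELL=/opt/local/bin/bash

# append the new shell to /etc/shells, but only if it isn't already listed
grep -qx "$NEW_SHELL" /etc/shells || echo "$NEW_SHELL" | sudo tee -a /etc/shells

# chsh will now accept it
chsh -s "$NEW_SHELL"
```

<p>The grep -qx guard keeps the edit idempotent, so re-running it won&#39;t litter /etc/shells with duplicates.</p>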
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because the single-quoted awk program can&#39;t itself contain a literal single quote, you pass one in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
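<p>To see the transformation in isolation, here&#39;s the same awk | paste | sed pipeline fed with printf instead of mysql (the sample values are made up):</p>

```shell
# three pretend rows of `mysql --silent` output
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' \
  | sed 's/.*/[&];/'
# prints: ['red','green','blue'];
```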
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products, and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and let us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While that means it also has to be constructed anew each time, it allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the GNU coreutils... and irssi. Enough utilities to let you build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. And by break I mean, in the absolute best case, the machine merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as the one to install it: boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and pasting it into MySQL returned a bunch of valid-looking results. Oddly though, the status column came first and the product id column second (the reverse of the if branch). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where the first column of each row becomes the key and the second column the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two entries, one per unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
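<p>The collapse is easy to model outside Zend if you treat fetchPairs() as building a map from (first column, second column) rows. A short Python sketch with invented sample rows:</p>

```python
# Rows as the buggy SELECT returned them: (status, entity_id).
buggy_rows = [(1, 101), (1, 102), (2, 103), (1, 104)]

# fetchPairs() semantics: first column is the key, second is the value,
# so rows sharing a status overwrite one another.
collapsed = dict(buggy_rows)
assert collapsed == {1: 104, 2: 103}  # only one entry per unique status

# With entity_id first, every product keeps its own status.
fixed_rows = [(pid, status) for status, pid in buggy_rows]
assert dict(fixed_rows) == {101: 1, 102: 1, 103: 2, 104: 1}
```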
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and a spouse who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different: they should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell after boot, such as when you use the su - command or run an explicit login shell as sometimes provided by a desktop environment. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc only runs if .bash_profile sources it (as many distributions&#39; default dotfiles do).</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block programmatically in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
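<p>For comparison, the same returnSelf trick can be done with Python&#39;s unittest.mock, though per method rather than for any method; the Mailer-ish method names below are invented. Setting a method&#39;s return_value to the mock itself makes chained calls resolve back to the same object:</p>

```python
from unittest.mock import MagicMock

mock = MagicMock()
# Make these methods fluent: each call returns the mock itself.
mock.setSubject.return_value = mock
mock.setBodyText.return_value = mock

result = mock.setSubject("hello").setBodyText("world")
assert result is mock  # the chain never left the mock
mock.setSubject.assert_called_once_with("hello")
```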
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; use <em>controlvm</em> for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libXss, not being there. But it <em>is</em> there; specifically though, it&#39;s a 64-bit library. I bet Skype isn&#39;t 64-bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: Skype is a 32-bit binary, so we need compatible x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libXss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how the same program is assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying, things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately upgrading my test-suites to be 3.6 compatible, is the extraction of database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing it this way already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
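<p>As an aside, bash can do the filename rewrite on its own, without spawning sed for every file; a minimal sketch using parameter expansion (the filename is made up):</p>

```shell
# ${IMAGE%.jpg} strips the shortest matching '.jpg' suffix,
# leaving just the base name to append '-resized.jpg' to.
IMAGE=holiday-photo.jpg
echo "${IMAGE%.jpg}-resized.jpg"
# holiday-photo-resized.jpg
```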
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 while maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
&gt;   - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of mybranch into a branch called adiffnamefortheremotebranch on the remote origin&#39;. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch called someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
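<p>If you want to convince yourself safely before touching a shared remote, the whole round trip can be reproduced against a throwaway local &#39;remote&#39; (the /tmp paths here are invented for illustration):</p>

```shell
set -e
# A bare repository stands in for a real remote.
git init --bare /tmp/atestrepo.git
git clone /tmp/atestrepo.git /tmp/atestclone
cd /tmp/atestclone
git config user.email you@example.com
git config user.name 'A Tester'
git commit --allow-empty -m 'initial commit'
git push origin HEAD

# Create and push a branch, then delete it remotely with the colon syntax.
git checkout -b develop
git push origin develop
git push origin :develop

# Only the original branch remains on the remote.
git ls-remote --heads origin
```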
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how sponge waits until end-of-file (EOF) on its input before opening and writing to the output file, i.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it were cp1252, but the reverse is <strong>NOT</strong> true).</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it to the original file.</p>
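<p>If you don&#39;t have moreutils to hand, a temporary file gives the same soak-then-write behaviour; a small self-contained sketch (test.txt is a made-up name) shows the conversion actually happening:</p>

```shell
# Write a one-byte cp1252 file: octal 351 is 0xE9, 'é' in cp1252.
printf '\351' > test.txt

# Poor man's sponge: convert to a temp file, then move it over the original.
iconv -f cp1252 -t utf-8 test.txt > test.txt.tmp && mv test.txt.tmp test.txt

# In utf-8 the same character is the two-byte sequence 0xC3 0xA9.
od -An -tx1 test.txt
```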
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and then we direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
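<p>You can convince yourself of the mechanics without touching the network by substituting cat for wget; everything below is a local stand-in (file names invented):</p>

```shell
# Build a small tarball to play with.
mkdir -p atarfile
echo 'hello' > atarfile/file.txt
tar zcf atarfile.tar.gz atarfile
rm -r atarfile

# Same shape as the wget one-liner, with cat standing in for the download.
tar zxv < <(cat atarfile.tar.gz)
cat atarfile/file.txt
```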
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
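<p>The mechanics here are plain shell command substitution: the backticked command runs first and its output becomes the outer command&#39;s argument list. In miniature, with echo standing in for the two drush calls:</p>

```shell
# The inner echo plays the role of 'drush pm-list --pipe'; its output
# becomes arguments to the outer command, as it would for pm-disable.
echo disable: `echo mod_a mod_b mod_c`
# disable: mod_a mod_b mod_c
```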
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful, cool and with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
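<p>Newer Git versions let you collapse the push and --set-upstream steps into one with the -u flag. A throwaway pair of local repositories (paths invented) shows the whole flow:</p>

```shell
set -e
git init --bare /tmp/featurerepo.git
git clone /tmp/featurerepo.git /tmp/featureclone
cd /tmp/featureclone
git config user.email you@example.com
git config user.name 'A Tester'
git commit --allow-empty -m 'Initial commit'

git checkout -b my-new-feature
git commit --allow-empty -m 'Initial feature commit'
# -u sets the upstream in the same step as the push.
git push -u origin my-new-feature

# The local branch now tracks origin/my-new-feature.
git rev-parse --abbrev-ref 'my-new-feature@{upstream}'
```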
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
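<p>To verify the change took effect, the id command lists a user&#39;s groups (a hypothetical session; note the user needs a fresh login before their own processes pick up the new group):</p>

```shell
# List the supplementary groups of the current user by name.
id -nG

# After running (as root): usermod -a -G wheel aaron
# 'id -nG aaron' would show 'wheel' appended to aaron's existing groups.
```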
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present:</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
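<p>A related tip (on reasonably recent git versions): git stash pop behaves like apply but also drops the entry from the stash list, saving a clean-up step. A sketch of the same rescue in a throwaway repository:</p>

```shell
# throwaway repo: start a change on master, move it to develop via the stash
tmp=$(mktemp -d) && git init -q "$tmp/repo" && cd "$tmp/repo"
git config user.email you@example.com && git config user.name You
echo one > notes.txt && git add notes.txt && git commit -qm 'Initial commit'
git branch develop

echo two >> notes.txt        # oops - uncommitted work on the wrong branch
git stash                    # bare 'git stash' is shorthand for 'git stash save'
git checkout -q develop
git stash pop                # apply the change and drop it from the stash list
git commit -aqm 'Apply stashed changes'
```
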
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
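<p>Worth noting: from git 1.8.0 the two-argument --set-upstream form shown above is deprecated in favour of --set-upstream-to. A sketch in throwaway repositories (a clone standing in for your checkout of origin):</p>

```shell
# throwaway setup: a 'remote' repo with a develop branch, plus a clone of it
tmp=$(mktemp -d)
git init -q "$tmp/remote" && cd "$tmp/remote"
git config user.email you@example.com && git config user.name You
echo demo > file.txt && git add file.txt && git commit -qm 'Initial commit'
git branch develop
git clone -q "$tmp/remote" "$tmp/work" && cd "$tmp/work"

git checkout -qb develop     # a local develop branch that tracks nothing yet
git branch --set-upstream-to=origin/develop develop
git rev-parse --abbrev-ref 'develop@{upstream}'   # prints origin/develop
```
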
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
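<p>If you&#39;d rather not edit the file by hand, running git config as the jenkins user writes the same [user] section for you. The sketch below simulates this against a scratch HOME; on a real box you would typically run the two git config commands via something like sudo -u jenkins instead (assuming your Jenkins process runs as a user named jenkins):</p>

```shell
# scratch HOME standing in for the jenkins home dir (/var/lib/jenkins on Ubuntu)
export HOME=$(mktemp -d)
git config --global user.name 'Jenkins'
git config --global user.email 'jenkins@localhost'
cat "$HOME/.gitconfig"       # shows the [user] name/email section written above
```
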
<p>That will allow you to avoid this:</p>
<pre><code>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128:
*** Please tell me who you are.

Run

  git config --global user.email &quot;you@example.com&quot;
  git config --global user.name &quot;Your Name&quot;

to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.

fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed

	at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
	at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
	at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
	... 12 more
Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128:
*** Please tell me who you are.
</code></pre>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on <a href="http://jenkins-php.org">jenkins-php.org</a>.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application, but you&#39;re free to use whatever codebase you like. For the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, PHPUnit must load a custom bootstrap before running tests. Open the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give our project a name. Choose whatever you like; for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that looks daunting at first glance. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. To have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries; it&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help with this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><code>reload</code> - reload the server configuration</li>
<li><code>restart</code> - restart the server</li>
<li><code>exit</code> - close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>$ rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using your package manager of choice, for example aptitude:</p>
<pre><code>$ sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>$ gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages add the following two update sites after installing indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - EGit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something built in to do it: </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module: </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into GitHub (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch: </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git</code></pre>
</li>
</ol>
<p>4. Refer to the remote branch using --set-upstream:</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
<p>Personally I find option 4 the best, as it involves the least work.</p>
<p>You can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>to avoid having to do this at all.</p>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you want to export. Another reply had a better way: query mysql for the list of tables matching the pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby, the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease of use perspective it would be convenient if they were equal.</p>
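<p>To make the difference concrete (a small illustration; the hash and key names are arbitrary):</p>

```ruby
# The same characters, but a Symbol and a String are not equal,
# so they address different hash entries.
h = { :key => "value" }
puts(:key == "key")    # false
puts h["key"].inspect  # nil -- the String is not the Symbol key
puts h[:key]           # value
```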
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install it.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select PDT Development Tools All in One SDK (leave the others unselected) and click Next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest, I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails, 4th Ed., puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using full String objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages), two Strings are different objects even if they consist of the same sequence of characters. In Ruby, two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
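<p>A quick object_id check makes the sharing concrete (a minimal sketch; this assumes frozen string literals are not enabled, which is Ruby&#39;s default, and :payload is just an arbitrary name):</p>

```ruby
# Every occurrence of the same symbol literal refers to one object;
# every string literal allocates a fresh object.
puts :payload.object_id == :payload.object_id     # true
puts "payload".object_id == "payload".object_id   # false
```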
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
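<p>In current Rubies the immutability is directly visible (a small illustration; the names are arbitrary):</p>

```ruby
# Symbols are frozen; Strings are mutable by default.
puts :name.frozen?   # true
s = "name"
s << "!"             # Strings can be modified in place
puts s               # name!
# Symbols have no mutating methods; e.g. :name << "!" raises NoMethodError.
```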
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem so out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant and natural sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases it is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text: 1 byte per character if the output is ASCII, or up to 4 bytes if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: compressing inline slows the dump down, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but lengthy dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important: you have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 dump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables are locked while performing a mysqldump, which means other clients will not be permitted to write to a table while the dump is in progress. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place and will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports, as MySQL will only rebuild the indexes at the end of the import. With keys enabled, the index is updated after each row is inserted, which is suboptimal for a batch import.</p>&#13;
<p>By default, each statement on an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new (non-bare) git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to: $ svn del</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will automatically stage and commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is the canonical naming for a bare git repository (i.e. one that has only the meta information and not a working copy) and usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master, mirroring the familiar model of Subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is short for --remote, and -d is short for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on three different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the cache directory PEAR expects:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 to read all the API docs for your installed gems (e.g. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control do not &#39;break the build&#39;, a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly the same as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
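<p>If you do need the program name in Ruby, it isn't gone: it lives in the global $0 (aliased as $PROGRAM_NAME) rather than in ARGV. A quick sketch of the difference (the variable names here are my own):</p>

```ruby
# In Ruby, ARGV holds only the arguments; the script's own name
# is exposed separately via $0 (a.k.a. $PROGRAM_NAME).
program = $PROGRAM_NAME  # roughly what C or PHP call argv[0]
args    = ARGV           # the real arguments, starting at index 0

puts "program: #{program}"
args.each_with_index { |arg, i| puts "arg #{i}: #{arg}" }
```

<p>So a C-style loop over arguments starting at index 1 simply becomes a loop over the whole of ARGV in Ruby.</p>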
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin or feature you need to go to the install new software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, you link against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation, as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It is even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('http://a.com/uri', 'uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As on many platforms, it seems that in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing to A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or, in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to import, convert it to UTF-8 as well:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you.  So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>Guiding me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby'. I did check out the Pragmatic Programmers' ubiquitous 'Programming Ruby', but the third edition seems better suited to stopping doors than to teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever term you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>'Constant' is a word that is pretty clear in its meaning: a value that remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables declared with a capitalised first letter. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
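<p>A minimal sketch of the problem (the constant name is purely illustrative); run it and watch Ruby shrug:</p>

```ruby
# Constants in Ruby are simply variables whose names begin with a
# capital letter.
ANSWER = 42

# Reassigning one merely prints "warning: already initialized constant
# ANSWER" to stderr -- the program carries on regardless.
ANSWER = 43

puts ANSWER   # prints 43; our 'constant' has quietly changed
```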
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know: if it quacks like a duck, it is a duck. Unfortunately, in Ruby a constant quacks like a duck but bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (e.g. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and be left with a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to gain an appreciation of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ('\0').</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str itself points to an address in memory where 20 bytes have been reserved for it. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20 char string defined but want to store a 25 char string, you need to allocate more memory: declare a new, larger array, or allocate a fresh buffer and copy the contents across. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>The memory type is very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
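<p>A short sketch putting the pieces together with fputcsv (the row data is purely illustrative):</p>

```php
<?php
// Write CSV rows to an in-memory stream, then read the result back
// without ever touching the filesystem.
$rows = array(array('id', 'name'), array(1, 'alice'), array(2, 'bob'));

$fh = fopen('php://memory', 'wb+');
foreach ($rows as $row) {
    fputcsv($fh, $row);
}

rewind($fh);                      // seek back to the start before reading
$csv = stream_get_contents($fh);  // the whole buffer as a string
fclose($fh);

echo $csv;
```

<p>If the data might grow large, php://temp behaves the same way but spills to a temporary file beyond a size threshold.</p>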
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever granularity you want to report on, i.e. just '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded post data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of keeping third party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today, have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
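<p>The two-step translation described above can be sketched in plain PHP. This is illustrative only — the function names here are hypothetical, not Magento's actual implementation (Magento uses its uc_words() helper and the Varien autoloader), but the mapping is the same:</p>

```php
<?php
// Step 1: the model alias's class part is uc-worded on underscores,
// e.g. 'a_long_name_for_a_model' -> 'A_Long_Name_For_A_Model'.
function aliasToClassPart(string $alias): string
{
    return implode('_', array_map('ucfirst', explode('_', $alias)));
}

// Step 2: the autoloader turns each underscore into a directory separator.
function classToPath(string $class): string
{
    return str_replace('_', '/', $class) . '.php';
}

$class = 'MyPackage_MyModule_Model_' . aliasToClassPart('a_long_name_for_a_model');
echo classToPath($class), "\n";
// MyPackage/MyModule/Model/A/Long/Name/For/A/Model.php
```

<p>Note that 'alongnameforamodel' uc-words to just 'Alongnameforamodel', which is why neither naming scheme finds ALongNameForAModel.php on a case-sensitive file system.</p>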
<p>On Windows this is fine, but on a case-sensitive file system (e.g. case-sensitive HFS+ on Mac, or most Unix file systems) it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making only update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Speaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days in the past (compared to the current time).</p>&#13;
<p><code>SELECT * FROM my_table WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old and the query above will select it.</p>&#13;
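<p>If you are assembling the query from PHP, the same 30-day cutoff can be computed with strtotime() and bound into a prepared statement instead of calling DATE_SUB() in SQL — a small sketch:</p>

```php
<?php
// Compute the cutoff date 30 days before a given day (here, 2010-05-20,
// matching the example above; in practice you would use time()).
$cutoff = date('Y-m-d', strtotime('2010-05-20 -30 days'));
echo $cutoff, "\n"; // 2010-04-20

// A real query would then compare: WHERE date_column < :cutoff
```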
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and by default already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates (app/design/frontend/default/default/layout) to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block, add the following code near the closing &lt;/block&gt; tag:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The &lt;reference name="root"&gt; element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected, run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile, or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ between encodings. So a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
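<p>The normalise-then-encode flow above can be seen end to end in a small, self-contained sketch (the sample string is illustrative):</p>

```php
<?php
// An ISO-8859-1 string containing a copyright sign (byte 0xA9).
$isoText = "Copyright \xA9 2010";

// Normalise to UTF-8 before storing or processing it.
$utf8Text = iconv('ISO-8859-1', 'UTF-8', $isoText);

// Encode for output, telling htmlentities the text is UTF-8.
echo htmlentities($utf8Text, ENT_QUOTES, 'UTF-8'), "\n";
// Copyright &copy; 2010
```

<p>Skip the iconv step, or omit the encoding parameter, and the 0xA9 byte is mishandled — which is exactly how the question marks above appear.</p>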
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem <code>sed -i 's/hello/goodbye/g' helloworld.txt</code> would be the way to achieve it.</p>&#13;
<p>It doesn't (at least with the BSD sed shipped on Mac OSX), and you'll either get a script processing error or something like: <code>sed: -i: No such file or directory</code>. BSD sed requires a backup-file extension argument after -i; GNU sed does not.</p>&#13;
<p>The trick is to pass an empty extension, like this: <code>sed -i '' 's/hello/goodbye/g' helloworld.txt</code></p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento. Below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page, you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expression generates the list of array indices: ${#FILES[@]} expands to the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
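<p>For the record, if you don't need the numeric index, bash can expand the array directly. Here is a sketch of the simpler idiom (the quotes keep paths containing spaces intact):</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 )&#13;
&#13;
for FILE in "${FILES[@]}"; do&#13;
  echo "$FILE"&#13;
done</pre>&#13;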
<p>So while the syntax is a little smelly, its terse power is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment or shipping account details, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 "$file" "resized_$file"; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date --date 'last month' '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer. So to use it you have to use the source version. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux: you go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>$ git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use a remote name other than origin, change the alias accordingly.</p>
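<p>If you would rather pick the remote at invocation time, a shell-function variant of the alias along these lines should work (just a sketch; it falls back to origin when called with no argument):</p>
<pre><code>    sup = "!f() { git branch --set-upstream-to=${1:-origin}/$(git symbolic-ref --short HEAD); }; f"
</code></pre>
<p>With that in place, git sup upstream tracks upstream/branchname, while a plain git sup behaves as before.</p>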
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell. This means variables cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
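<p>A minimal demonstration of the effect, along with a here-document as one subshell-free workaround:</p>
<pre><code>count=0
printf &#39;a\nb\n&#39; | while read -r line; do count=$((count+1)); done
echo "$count"   # prints 0 - the while loop ran in a subshell

count=0
while read -r line; do count=$((count+1)); done &lt;&lt;EOF
a
b
EOF
echo "$count"   # prints 2 - the loop ran in the current shell
</code></pre>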
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible, you can override the setting and force a true merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
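<p>With merge.ff set to only, a typical day-to-day flow looks something like this sketch (the branch names are illustrative):</p>
<pre><code>$ git checkout feature
$ git rebase master      # replay your commits on top of master
$ git checkout master
$ git merge feature      # fast-forwards cleanly, no merge commit
</code></pre>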
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that constantly mutates global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and a brief comparison between the two principal TDD xUnit styles: the Statist TDD and Mockist/London School TDD styles. The former is a test style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns NULL if tableref.column is equal to 0. Because MIN ignores NULL values, the zero rows are excluded from the aggregate, so you get the smallest value greater than zero.</p>
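<p>If you want to see the trick in isolation, the same NULLIF/MIN behaviour can be reproduced from the shell, with sqlite3 standing in for MySQL (purely an illustrative sketch):</p>
<pre><code>$ echo &#39;SELECT MIN(NULLIF(p, 0)) FROM (SELECT 0 AS p UNION SELECT 5 UNION SELECT 3);&#39; | sqlite3
3
</code></pre>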
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code-based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our Magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias that passes this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source ~/.bashrc # or source ~/.zshrc
</code></pre>
<p>Now that the alias is active, we can check it&#39;s working:</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn on the channel autodiscovery option, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled; to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing it under /opt/local/etc/nginx/ssl:</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps under /Users/aaron/Sites, but remember that every directory in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally, this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
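<p>To double-check a path is traversable end to end, you can walk up the tree and test the world-execute bit on each component. A quick sketch, assuming a POSIX shell (check_traversal is a made-up helper, not something Macports or this setup provides; find -perm -001 matches only when the o+x bit is set):</p>

```shell
# Walk from a directory up towards / (or a trusted ancestor) and report
# any component that is missing the world-execute (a+x for directories) bit.
check_traversal() {
    dir=$1
    stop=${2:-/}   # optional: stop at a trusted ancestor instead of /
    while [ -n "$dir" ] && [ "$dir" != "$stop" ] && [ "$dir" != "/" ]; do
        # find -prune -perm -001 prints the directory only if o+x is set
        if [ -z "$(find "$dir" -prune -perm -001)" ]; then
            echo "missing a+x on: $dir"
            return 1
        fi
        dir=$(dirname "$dir")
    done
    echo "ok"
}
```

<p>Running <code>check_traversal /Users/aaron/Sites/magento</code> should print <code>ok</code> once the chmod above has been applied.</p>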
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember when. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite you first need Selenium running, so open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go:</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP MySQL port is a little bit of a pain: by default Macports doesn&#39;t set a default MySQL socket path, which leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the socket path for the MySQL version you&#39;re using to PHP&#39;s mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
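<p>Since sshfs just drives ssh under the hood, anything .ssh/config understands works here too. A sketch, where the host alias and paths are only illustrative:</p>

```
# ~/.ssh/config
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
```

<p>With that entry in place, the mount command shrinks to <code>sshfs awshost:/var/www/ ~/Sites/awshost</code>.</p>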
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img
echo strstr($url, &#39;/&#39;);  // prints //www.google.com/a/b/c/d.img
</code></pre>
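<p>For comparison, shell expresses the same first-versus-last distinction with parameter expansion, and its notation at least wears the difference on its sleeve: one extra character. A sketch:</p>

```shell
url='http://www.google.com/a/b/c/d.img'
# '##' strips the longest prefix matching */, i.e. everything up to the last /
echo "${url##*/}"  # prints d.img
# '#' strips the shortest prefix matching */, i.e. up to the first /
echo "${url#*/}"   # prints /www.google.com/a/b/c/d.img
```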
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate primitives such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However, I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about; scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationships with the researchers who shared his vision of interactive computing and those, whether in the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way, so this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems, it becomes easier to let more significant problems slide too. Hence the rule of thumb: &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
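<p>That lookup order amounts to a first-match search over the three code pools. A sketch in shell (resolve_class is purely illustrative, not real Magento code, which does this in PHP via its autoloader):</p>

```shell
# The first pool (local, community, core) containing the class file wins.
resolve_class() {
    class_file=$1
    for pool in local community core; do
        if [ -f "app/code/$pool/$class_file" ]; then
            echo "app/code/$pool/$class_file"
            return 0
        fi
    done
    return 1  # not found in any pool
}
```

<p>So dropping the amended Checkout.php into app/code/local means it is found before, and instead of, the core version.</p>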
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process, and Nginx behaves differently depending on whether it receives a quit signal or a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made them fashionable, but Smalltalk sported things like block closures over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat; but we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. The team formalised most of the vocabulary of OO software development while building Smalltalk, yet it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, I wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my full-time job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been easier, but not by as much as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>Since making a determined effort to read more, I&#39;ve read four books cover to cover and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can: just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I&#39;m slightly obsessive about keeping the unread count at zero: at the end of a thirty-minute work sprint, I would take five minutes to quickly flick through the list. It really is distracting, and most of the content is superficial. The benefit of a book over a blog is the tendency for a book&#39;s ideas to be more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading them than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
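<p>For instance, with some inline sample input instead of /etc/passwd (a self-contained sketch):</p>

```shell
# -s sets the field delimiter, -t aligns the fields into a table
printf 'root:0:System Administrator\naaron:501:Aaron Bonner\n' | column -s':' -t
```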
<p>These next few are specific to zsh. While I do love bash, since switching to zsh I haven&#39;t really looked back; when you work in a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number then switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
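<p>None of this is default behaviour in a vanilla zsh; the snippet below is a rough sketch of the options and aliases involved (frameworks like oh-my-zsh set up something similar for you, and the exact digit aliases may differ):</p>

```shell
# ~/.zshrc -- a sketch of the options behind the tricks above
setopt auto_cd            # a bare directory name cd's into it ('..', 'aaron')
setopt auto_pushd         # every cd pushes the old directory onto the stack
setopt pushd_ignore_dups  # keep the stack free of duplicates

alias d='dirs -v'         # list the stack with indexes
# one way to make bare digits jump straight to a stack entry
for i in {1..9}; do alias "$i"="cd +$i"; done
```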
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
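<p>It works because <strong>:w !cmd</strong> pipes the buffer to a shell command instead of writing the file directly; sudo runs tee as root, and tee writes its stdin back over the file. The plumbing, minus vim and sudo, looks like this (the filename is illustrative):</p>

```shell
# tee copies its stdin to the named file (stdout is discarded here),
# which is exactly what receives the buffer contents from vim
printf 'new contents\n' | tee /tmp/demo.conf > /dev/null
cat /tmp/demo.conf   # prints: new contents
```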
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, only to get permission denied when you go to write it...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this works fine; however, to debug during a phpunit test you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 back to your machine&#39;s port 9000. When xdebug connects to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
<p>It&#39;s a bit of a hack, but a time-saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t a bad option. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem; its centralised nature is a bottleneck, and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires a stable release of package y, while package z requires a beta release of y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages; code of at best variable and at worst dubious quality; and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit on top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml, though, is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint works on files rather than piped input, so we use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
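<p>Process substitution works anywhere a command insists on a filename: bash runs the inner command and hands the consumer a readable file path (typically something like /dev/fd/63). A generic, magento-free sketch:</p>

```shell
# diff two "files" that never touch the disk (requires bash, not plain sh)
diff <(printf '3\n1\n2\n' | sort) <(printf '1\n2\n3\n') && echo identical
```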
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the affected files in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care what differs in the specific contents between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
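<p>A throwaway repository demonstrates the effect (paths and the commit message are illustrative):</p>

```shell
# build a one-commit repo touching two files, then list names only
git init -q demo && cd demo
echo hi > a.txt && echo hi > b.txt
git add . && git -c user.email=demo@example.com -c user.name=demo commit -qm 'add files'
git show --name-only --format= HEAD   # prints a.txt and b.txt
```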
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of its popularity and low barrier to entry. Basically, there are more examples of bad code out there than on pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch for a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archaeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there: advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have plenty of other stuff to learn, so I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, that Rails programmers live in a glass house and shouldn&#39;t throw stones, for one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out-of-date advice piles up, making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, the community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table-cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
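<p>For reference, the shape of such a local.xml override is roughly the following sketch. The event and observer node names below are assumptions; check them against app/code/core/Mage/Log/etc/config.xml for your Magento version.</p>

```xml
<!-- app/etc/local.xml (sketch): mark the log observers as disabled so
     Magento skips them instead of writing to the log_* tables -->
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
            <customer_login>
                <observers><log><type>disabled</type></log></observers>
            </customer_login>
            <customer_logout>
                <observers><log><type>disabled</type></log></observers>
            </customer_logout>
        </events>
    </frontend>
</config>
```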
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages and they definitely get routed through Magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace line 10 with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of how bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one spot but (programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). To use Data Bags with Chef Solo, you need version 10.14.0 or above, which means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
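<p>The override looks something like the following sketch. The menu node name (here &quot;newsletter&quot;) and the module name Nonexistent_Module are illustrative assumptions; the menu node must mirror the path of the core menu entry you want to hide.</p>

```xml
<!-- adminhtml.xml (sketch) in a custom module: the <depends> clause
     points at a module that does not exist, so the menu item is hidden -->
<config>
    <menu>
        <newsletter>
            <depends>
                <module>Nonexistent_Module</module>
            </depends>
        </newsletter>
    </menu>
</config>
```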
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
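<p>Putting the touch commands and the find together (everything below is a self-contained illustration with made-up file names and dates):</p>

```shell
# Create three files with known mtimes plus two boundary marker files,
# then list only the files modified between the two markers.
demo=$(mktemp -d)
mkdir "$demo/files"
touch -t 202001010000 "$demo/files/old.log"   # before the range
touch -t 202006150000 "$demo/files/mid.log"   # inside the range
touch -t 202012310000 "$demo/files/new.log"   # after the range
touch -t 202003010000 "$demo/start_marker"    # range start
touch -t 202009010000 "$demo/end_marker"      # range end
cd "$demo"
find files -type f -newer start_marker ! -newer end_marker
# → files/mid.log
```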
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and &#39;\;&#39; terminates the command sequence (much like &#39;;&#39; does in regular bash; it is escaped so the shell passes it through to find).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the Thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
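<p>In short, assuming the Macports path from above (both commands change system configuration, so check the path first):</p>

```shell
# Add the Macports bash to the list of permitted login shells,
# then switch the current user's login shell to it.
echo /opt/local/bin/bash | sudo tee -a /etc/shells
chsh -s /opt/local/bin/bash
```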
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query to mysql and ask it to give you raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
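<p>To see just the formatting pipeline in isolation, you can stand in printf for the mysql stage (the column values here are invented):</p>

```shell
# Wrap each line in single quotes, join with commas, then wrap the
# whole thing in [ ]; the same pipeline as above, minus mysql.
printf 'foo\nbar\nbaz\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# → ['foo','bar','baz'];
```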
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name:[* TO *]
</code></pre>
<p>It is hard, though, to search for dates that EITHER are NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It means the validator also has to be constructed on each loop, but that allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called (and I don&#39;t think they do it this way anymore) a &#39;stage 1&#39; install. A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot; by break, I mean in the absolute best case the machine merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. First step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now, in order to have a functional chroot, we need the proc, dev and sys filesystems mounted inside it. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
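<p>Assuming the same /mnt/ubuntu mount point as above, that&#39;s simply:</p>

```shell
# Copy the host's DNS configuration into the chroot so name resolution
# works inside it. /mnt/ubuntu is the example mount point used above;
# the guard makes this a no-op if the target isn't mounted yet.
if [ -d /mnt/ubuntu/etc ]; then
    cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
fi
```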
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
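<p>For reference, the whole recovery procedure can be collected into one small script. This is a sketch: the device names and the /mnt/ubuntu mount point are the examples from above, and DRY_RUN=1 makes it print the commands instead of running them (you need root, and your own partition numbers, for the real thing).</p>

```shell
#!/bin/sh
# Dry-run sketch of the chroot recovery steps described above.
# /dev/sda5, /dev/sda1 and /mnt/ubuntu are examples - substitute your own.
DRY_RUN=${DRY_RUN:-1}
PLAN=""

run() {
    # Record and echo each command; only execute it when DRY_RUN=0.
    PLAN="$PLAN$*;"
    echo "+ $*"
    if [ "$DRY_RUN" = 0 ]; then "$@"; fi
}

run mount -t ext4 /dev/sda5 /mnt/ubuntu
run mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
run mount -t proc none /mnt/ubuntu/proc
run mount -o bind /dev /mnt/ubuntu/dev
run mount -o bind /sys /mnt/ubuntu/sys
run cp /etc/resolv.conf /mnt/ubuntu/etc/resolv.conf
run chroot /mnt/ubuntu /bin/bash
```

<p>With DRY_RUN=0 the same script performs the mounts for real (run it as root from the livecd).</p>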
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you are scripting the status update of many stock items, there can be a short period where your store&#39;s products are unavailable.</p>
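<p>An untested sketch of forcing that reindex from the command line, using the shell/indexer.php script that ships with Magento (the MAGE_ROOT path here is an assumption; point it at your own install):</p>

```shell
# Re-run just the cataloginventory_stock indexer via Magento's bundled
# shell/indexer.php script. MAGE_ROOT is an assumed example path.
MAGE_ROOT=${MAGE_ROOT:-/var/www/magento}
if [ -f "$MAGE_ROOT/shell/indexer.php" ]; then
    reindex_output=$(php "$MAGE_ROOT/shell/indexer.php" --reindex cataloginventory_stock)
else
    reindex_output="indexer.php not found under $MAGE_ROOT - set MAGE_ROOT"
fi
echo "$reindex_output"
```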
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and pasting it into MySQL gave a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where they are the other way around). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs() returns an associative array where the first column is the key and the second column is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just a handful of rows, one per unique status value. For this code to work as expected, the entity id (product id) needs to be the first column in the result set, so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object-Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea, that messaging is central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles, not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The idea that software is about communication is emphasised across the book, and tests are no different: they should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice on constructing data builders (techniques for creating test data for use in your test cases) particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly, there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to .twig files, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion when you open a login shell by other means, such as with the su - command or via an explicit login shell option sometimes provided by a desktop environment&#39;s terminal. In these cases the same rule applies: a login shell means .bash_profile is sourced, and .bashrc only runs if .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
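<p>The second approach looks like this in ~/.bash_profile:</p>

```shell
# ~/.bash_profile
# One-time environment setup lives here...
export PATH="$HOME/bin:$PATH"

# ...then hand everything else over to .bashrc so login shells and
# ordinary interactive shells behave the same way.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```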
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
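<p>To preview what would be purged first: dpkg marks removed-but-not-purged packages with an &#39;rc&#39; status in its listing. A small sketch, guarded so it degrades quietly on non-Debian systems:</p>

```shell
# List packages that were removed but still have config files left behind
# ("rc" state in dpkg -l output).
if command -v dpkg >/dev/null 2>&1; then
    rc_packages=$(dpkg -l 2>/dev/null | awk '/^rc/ { print $2 }')
else
    rc_packages=""
fi
echo "$rc_packages"
```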
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm must only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
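<p>A small guard along those lines, checking whether the VM is running before deciding which subcommand to use (a sketch: &quot;Name of VM&quot; is an example, and VBoxManage may not be on your path):</p>

```shell
# Choose modifyvm vs controlvm based on whether the VM is currently running.
# "Name of VM" is an example name; VBoxManage availability is checked first.
VM="Name of VM"
if command -v VBoxManage >/dev/null 2>&1 \
        && VBoxManage list runningvms | grep -q "\"$VM\""; then
    advice="VM is running - use controlvm"
else
    advice="VM is stopped (or VBoxManage unavailable) - use modifyvm"
fi
echo "$advice"
```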
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing not only a compatible libXss but also the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
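<p>A small convenience worth knowing (my own addition, not specific to Skype): ldd marks anything it cannot resolve with &#39;not found&#39;, so you can filter straight to the missing libraries.</p>

```shell
# Show only the dependencies the loader could not resolve;
# a healthy binary produces no output from this pipeline.
ldd /usr/bin/skype | grep 'not found'
```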
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in how the same program is assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can occasionally be dizzying; things have been put into and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately, upgrading my test-suites to be 3.6 compatible, is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause for this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change in code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll the dice by restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In these cases, convert would actually resize the image to something like 1280x800. But if we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at the remote origin. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
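<p>If you want to convince yourself without touching a real repository, the whole dance can be tried against a throwaway local &#39;remote&#39; (the paths and branch names below are my own invention):</p>

```shell
# Create a bare repository to act as a remote, push a branch to it,
# then delete that remote branch by pushing 'nothing' into it.
git init --bare /tmp/demo-remote.git
git init /tmp/demo-work
cd /tmp/demo-work
git -c user.name=Demo -c user.email=demo@example.com \
    commit --allow-empty -m 'initial commit'
git remote add origin /tmp/demo-remote.git
git push origin HEAD:develop   # create a remote branch called develop
git ls-remote --heads origin   # develop is listed
git push origin :develop       # push nothing into it, deleting it
git ls-remote --heads origin   # develop is gone
```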
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. But what is different is how Sponge waits until the end-of-file character (EOF) before opening and writing to an output file. I.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true).</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
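<p>You can also spot-check the conversion on a single known byte before committing to a whole directory (this round-trip check is my own, not from the iconv docs): 0xE9 (octal 351) is &#39;é&#39; in cp1252 and should come out as the two UTF-8 bytes c3 a9.</p>

```shell
# See what 'file' thinks the encoding is, then verify iconv's output.
printf '\351' > sample.txt
file -bi sample.txt                              # prints the detected charset
printf '\351' | iconv -f cp1252 -t utf-8 | od -An -tx1
```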
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
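<p>If you don&#39;t have moreutils to hand, a temporary file gets you much the same effect; here is a sketch of that fallback (my own, not part of sponge):</p>

```shell
# Poor man's sponge: write iconv's output to a temp file, then
# replace the original only once the conversion has finished.
for FILE in *.c *.h; do
    iconv -f cp1252 -t utf-8 "$FILE" > "$FILE.tmp" && mv "$FILE.tmp" "$FILE"
done
```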
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
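<p>For what it&#39;s worth, an ordinary pipe does the same job reading left to right, with no named pipe involved; whether that is clearer is a matter of taste (same placeholder URL as above):</p>

```shell
# Equivalent one-liner with a plain pipe: wget writes the tarball
# to stdout and tar reads it from stdin.
wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxv
```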
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply boring through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then go through the list again, disabling the previously greyed-out modules (because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory, e.g. /home/user/public_html/drupal.</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together we can chain together the module list command output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass that output as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
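<p>To verify the change took, id will list a user&#39;s groups (the &#39;aaron&#39; user is just the example from above; note that new group memberships only apply to sessions started after the change):</p>

```shell
# Show the groups a user belongs to; after the usermod (and a fresh
# login) 'wheel' should appear in this list.
id -nG aaron
```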
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
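<p>Alternatively, you can have git write the file for you with git config. Run these as the user Jenkins operates under, e.g. prefixed with <em>sudo -u jenkins -H</em> on the Ubuntu package:</p>
<pre><code>$ git config --global user.name Jenkins
$ git config --global user.email jenkins@localhost
</code></pre>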
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application, but you&#39;re free to use whatever codebase you like. For the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>The first thing we need to do is make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before the project workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, which lets your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em>: reload the server configuration</li>
<li><em>restart</em>: restart the server</li>
<li><em>exit</em>: shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepared package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
# sudo applies to the command, not the shell&#39;s output redirection, so use tee
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
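<p>If you&#39;re provisioning machines with a script, the same edit can be made non-interactively with a sed substitution (the port 8081 here is just an example):</p>
<pre><code>$ sudo sed -i &#39;s/^HTTP_PORT=.*/HTTP_PORT=8081/&#39; /etc/default/jenkins
$ sudo service jenkins restart
</code></pre>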
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to  to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
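<p>On a really flaky link you may be interrupted more than once. A small sketch (using the same illustrative host and paths as above) that simply retries until rsync exits cleanly, resuming the partial file each time:</p>
<pre><code>until rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path; do
    sleep 5
done
</code></pre>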
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful by itself, but it was enough for Google to lead me to a solution: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo:</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>It turns out there&#39;s already something built in to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent <em>git pull</em> invocations will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least amount of work.</p>
<p>You can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>To avoid having to do this.</p>
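<p>The whole workflow can be condensed: modern git lets you record the upstream at push time with the -u (--set-upstream) flag. Below is a self-contained sketch of this, using throwaway temporary directories as a stand-in for a real remote like github; the paths and the demo identity are placeholders, not anything from this post.</p>

```shell
# Sketch: create a bare "remote", point an existing local repo at it, and
# let `git push -u` record origin/master as the upstream in one step.
# The mktemp directories stand in for github/your real remote.
set -e
remote=$(mktemp -d)/foo.git
git init --bare "$remote" >/dev/null

work=$(mktemp -d)
cd "$work"
git init >/dev/null
git symbolic-ref HEAD refs/heads/master   # ensure the branch is called master
git -c user.name=Demo -c user.email=demo@example.com \
    commit --allow-empty -m 'initial commit' >/dev/null
git remote add origin "$remote"
git push -u origin master >/dev/null 2>&1  # -u sets the upstream at push time

git rev-parse --abbrev-ref 'master@{upstream}'   # prints: origin/master
```

The final command confirms the tracking relationship is in place, so a plain git pull will work without complaint.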
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store, as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of the tables you want to export. Another reply had a better way: use a call to mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
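<p>The approach boils down to something like the sketch below: ask mysql for the table names matching a LIKE pattern, then hand the list to mysqldump. This is an illustration, not the verbatim gist; the function name and argument order are my own, and as written mysql and mysqldump will each prompt for the password.</p>

```shell
# Illustrative sketch of dumping only tables that match a LIKE pattern.
# Not the verbatim gist -- names and argument order are placeholders.
mysqldump_bypattern() {
    local user="$1" db="$2" pattern="$3" outfile="$4"
    # -N (skip column names) gives a bare, newline-separated table list
    local tables
    tables=$(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '${pattern}'" "$db")
    # $tables is deliberately unquoted so each table name becomes its own argument
    mysqldump -u"$user" -p "$db" ${tables} > "$outfile"
}

# Usage (hypothetical database and pattern):
#   mysqldump_bypattern myuser mydb 'mytables_%' mytables.sql
```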
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas of Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it run for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either tells you to ignore them for now or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>Among the numerous ways of defining variables in Ruby, this looks like yet another variable-like construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails (4th ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C), for example, is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using full objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a Ruby String. In Ruby (and most OO languages) two strings are different objects even if they consist of the same sequence of characters. In Ruby, two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime: you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t yet feel qualified to answer that definitively. My gut, though, says no. In a high-level language, the need for extra syntax to optimise code adds an unnecessary burden on the programmer. Symbols seem out of place in a language that works so hard to strip away unnecessary syntax, and to me they detract from Ruby&#39;s power to define elegant, natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
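<p>One caveat worth noting: bash aliases are plain text substitution, so the <code>&quot;$@&quot;</code> above doesn&#39;t actually receive the filename; the command only works because the argument happens to land after the redirection. A small function, as sketched below, states the intent directly and is a drop-in replacement:</p>

```shell
# The alias relies on the filename landing after the redirection by accident;
# a shell function passes arguments through "$@" as intended.
ql() { qlmanage -p "$@" > /dev/null 2>&1; }

# Usage (on OSX, where qlmanage lives): ql screenshot.png
```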
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export and import copies of MySQL databases, whether for testing and debugging or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases it is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character if the output is ANSI, or up to four bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, this should be avoided: the compression slows the dump, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but long-running dumps should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of compression format is important. You have to trade off decompression speed against file size: the extra CPU time consumed decompressing a bzip2 datadump may well be worse than carrying the few extra megabytes of the faster gzip. Whatever your choice, importing a compressed datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables are locked while performing a mysqldump, meaning other clients are not permitted to write to a table while it is being dumped. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump starts a transaction before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place and will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the consistency of a MyISAM table can be lost as writes occur during the backup process; that risk is weighed against the risk of blocking access to the table during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of compression format will have a large bearing on import performance: gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This brings unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
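<p>Pulling the options discussed in this post together, a lock-friendly, import-friendly dump might look like the sketch below. The function wrapper, credentials and file names are placeholders of my own; only the mysqldump flags come from the discussion above.</p>

```shell
# Sketch combining the mysqldump options discussed in this post.
# User, database and output file are placeholders -- substitute your own.
backup_db() {
    local user="$1" db="$2" outfile="$3"
    mysqldump -u"$user" -p \
        --single-transaction \
        --skip-lock-tables \
        --disable-keys \
        --no-autocommit \
        "$db" > "$outfile"
}

# Usage: backup_db myuser mydatabase mydump.sql
# Restore later with: mysql -umyuser -p mydatabase < mydump.sql
```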
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>Suppose you have a string, for example:</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology, ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Shows pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, runs the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that has only the meta information and no working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This mirrors the familiar Subversion model of svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to it to be picked up.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on three different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the missing PEAR cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808, and read the API docs for all your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small: some basic webpage views, maybe a few forms and, likely, some sort of search functionality. If things need to change, you can normally change them in place, directly on the web-server, without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But getting a test environment set up and representative of the live system is often a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great; that is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want Git or SVN SCM access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file, plus sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll only sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0];&#13;
&#13;
$ php test.php 1234&#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby&#13;
# file test.rb&#13;
puts ARGV[0]&#13;
&#13;
$ ruby test.rb helloworld&#13;
&gt; helloworld&#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name, while in Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
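For comparison, here's a quick shell sketch of the C-style convention that PHP and Bash follow (the script path and its contents are just for illustration):

```shell
# Write a throwaway two-line script; the path is arbitrary.
cat > /tmp/argdemo.sh <<'EOF'
echo "argv0: $0"
echo "argv1: $1"
EOF

# sh/Bash follow the C convention: $0 is the program name, $1 the first argument.
sh /tmp/argdemo.sh helloworld
# prints:
#   argv0: /tmp/argdemo.sh
#   argv1: helloworld
```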
<p>I'm trying to decide which makes more sense; probably the Ruby/Perl implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL RubyGem as directed by Rake, it links against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a>, you'll often want to update documents over the course of the index's lifecycle. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP it seems documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book, or in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to back up your forum database; the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
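One caveat: that global replace will also rewrite any literal occurrence of 'latin1' inside post bodies, so it's worth dry-running the substitution on a throwaway sample first and eyeballing the result. A minimal sketch (the sample file names and table definition are hypothetical):

```shell
# A one-line stand-in for the real dump file.
printf 'CREATE TABLE post (...) DEFAULT CHARSET=latin1;\n' > /tmp/dump_sample.sql

# Write to a new file rather than editing in place, so the original survives.
sed 's/latin1/utf8/g' /tmp/dump_sample.sql > /tmp/dump_sample_utf8.sql

# Count lines now declaring the utf8 charset.
grep -c 'CHARSET=utf8' /tmp/dump_sample_utf8.sql
# prints: 1
```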
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
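As a quick sanity check, you can watch iconv do its work on a one-line sample (temp file names here are arbitrary): 'é' is one byte in Latin-1 but two bytes in UTF-8, so the converted file should grow by one byte.

```shell
# 'café' plus newline in Latin-1: \351 is the Latin-1 byte for é (5 bytes total).
printf 'caf\351\n' > /tmp/latin1_sample.txt

iconv -f latin1 -t utf-8 /tmp/latin1_sample.txt > /tmp/utf8_sample.txt

wc -c < /tmp/latin1_sample.txt   # 5 bytes
wc -c < /tmp/utf8_sample.txt     # 6 bytes: é is now encoded as two bytes
```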
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to install, convert it to UTF-8 in the same way:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts - it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the title similarities, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just global variables declared capitalised. Oh, sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it as a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the object's current SQL state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design-challenged developer like myself, http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[For some people, using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>The handy method Zend_Db_Select::assemble() will output the select object's state as SQL. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. macOS). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in strictly typed OO (e.g. Java).</p>&#13;
<p>This is, of course, not absolute: there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python, and a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like assembly was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: from the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This reserves 20 bytes in memory (room for 19 characters plus the terminating NUL), and str refers to the address where they start. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in: these were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly, you have methods within zend_pdf that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, e.g. just have '%Y' for a yearly grouping, or have '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one with the request data Worldpay sent to your callback URL, including the encoded post data, and one with the response from your server.</p>&#13;
<p>Now assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>But anyway, using svn:externals is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p>The solution is explained here:</p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically edit your theme's layout/page.xml and change &lt;block type="core/profiler" output="toHtml"/&gt; to &lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on different (namely, case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive file system, e.g. case-sensitive HFS on the Mac or a typical Unix file system, it will not work.</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find an attribute code, look it up either in the db (eav_attribute) or in the admin backend under Catalog -&gt; Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old.</p>&#13;
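<p>For the same cutoff arithmetic outside MySQL, here is a quick sketch using the date utility from GNU coreutils (the -d flag is GNU-specific; BSD/macOS date spells this differently, with -v):</p>

```shell
# Compute the cutoff date 30 days before a given day, mirroring
# DATE_SUB('2010-05-20', INTERVAL 30 DAY).
cutoff=$(date -d "2010-05-20 -30 days" +%F)
echo "$cutoff"   # prints 2010-04-20
```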
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it in a .phtml file, and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific kind of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where they removed the second parameter passed to the delete function. See here for more info: http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' (or export LC_ALL='en_GB.UTF-8').</p>&#13;
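<p>As a sketch of the whole process (the locale names here are examples; use whatever 'locale -a' reports on your machine):</p>

```shell
# See which UTF-8 locales this machine provides (names vary by OS).
locale -a | grep -i 'utf' | head -n 3

# Enable one for the current session; add this export line to
# ~/.profile or ~/.bash_profile to make it permanent.
export LC_ALL='en_GB.UTF-8'
echo "$LC_ALL"
```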
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client where they were pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, change. So a cp-1252 trademark symbol has a different code in utf-8, and if you try to render a cp-1252 copyright symbol as utf-8 you will just see a question mark in the browser, as that byte is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output, you need to pass a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
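<p>The same normalisation can be sketched at the shell with the iconv command line tool (assuming a Unix-like system; the byte values are standard iso-8859-1 and utf-8):</p>

```shell
# "café" encoded as iso-8859-1: the é is the single byte 0xE9,
# which is not valid utf-8 on its own.
printf 'caf\xe9' > latin1.txt

# Normalise to utf-8; é becomes the two-byte sequence 0xC3 0xA9.
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt

wc -c latin1.txt utf8.txt   # 4 bytes in, 5 bytes out
```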
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It isn't, and you'll get either a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick, on BSD/macOS sed, where -i requires an explicit (possibly empty) backup suffix, is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
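<p>A small sketch of a portable variant: passing a non-empty backup suffix such as -i.bak works with both GNU and BSD sed:</p>

```shell
printf 'hello world\n' > helloworld.txt

# An explicit backup suffix works with both GNU sed and BSD sed,
# and leaves the original behind as helloworld.txt.bak.
sed -i.bak 's/hello/goodbye/g' helloworld.txt

cat helloworld.txt   # prints: goodbye world
```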
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expands to the indices of the array: ${#FILES[@]} is the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0 through 4, one per line.</p>&#13;
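<p>As an aside, bash can iterate over the array elements directly, without seq at all; a sketch:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

# Iterate over the elements directly; quoting "${FILES[@]}"
# keeps any paths containing spaces intact.
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```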
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment or shipping account details, and so on.</p>&#13;
<p>Generally this is pretty easy to script up in bash, Python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
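&#13;
// Example use (assuming you save this script as encrypt.php in the&#13;
// Magento root): php encrypt.php &#39;valuetoencrypt&#39;&#13;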
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
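<p>For reference, a config file looks something along these lines at the time of writing; treat the class and fixer setup below as an illustrative sketch of the master-branch API rather than gospel, since it may well have moved on by the time you read this:</p>

```php
&lt;?php
// .php_cs -- picked up by php-cs-fixer from the working directory
$finder = Symfony\CS\Finder\DefaultFinder::create()
    -&gt;in(__DIR__);

return Symfony\CS\Config\Config::create()
    -&gt;finder($finder);
```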
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
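<p>If you prefer not to edit the file by hand, the same alias can be added with git config; this is equivalent to the snippet above, and the single quotes stop the command substitution from running when the alias is defined:</p>

```shell
# Register the alias globally; $(git symbolic-ref --short HEAD) is
# expanded each time the alias runs, not when it is defined.
git config --global alias.sup \
  '!git branch --set-upstream-to=origin/$(git symbolic-ref --short HEAD)'

# Show what was stored, as a quick check.
git config --global --get alias.sup
```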
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each stage of a bash pipeline executes in a separate subshell… this means variables cannot be passed along the pipeline, as each new subprocess gets a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
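<p>A minimal sketch of both the pitfall and one of those workarounds (this assumes bash or another POSIX shell; zsh actually runs the last pipeline stage in the current shell, so it behaves differently):</p>

```shell
# The while loop below runs in a subshell, so its updates to count
# are thrown away when the pipeline finishes.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count + 1)); done
echo "after pipeline: $count"    # still 0 in bash

# Workaround: lose the pipe so the loop runs in the current shell,
# here via a here-document (see the BashFAQ page for more options).
count=0
while read -r line; do count=$((count + 1)); done <<'EOF'
a
b
c
EOF
echo "after here-doc: $count"
```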
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
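<p>If you want to see the setting in action without touching a real project, here is a quick sketch using a throwaway repository:</p>

```shell
# Create a scratch repo with merge.ff=only and two diverged branches.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git config merge.ff only

echo base > file.txt && git add file.txt && git commit -qm base
git branch feature                  # branch off at the base commit
echo more >> file.txt && git commit -qam mainline   # default branch moves on

git checkout -q feature
echo feature > feature.txt && git add feature.txt && git commit -qm feature
git checkout -q -                   # back to the default branch

# The branches have diverged, so a fast-forward is impossible and
# merge.ff=only makes git refuse the merge.
if git merge feature 2>/dev/null; then merged=yes; else merged=no; fi
echo "merge allowed: $merged"
```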
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial though, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and a brief comparison between the two principal TDD xUnit styles: the statist and the mockist/London school styles. The former is a style mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The mockist approach is less interested in observing state and instead is more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group; however, some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
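<p>To make that concrete, here is a tiny worked example (table and column names are made up for illustration):</p>

```sql
-- products:  group_id | price      MIN(price)   MIN(NULLIF(price, 0))
--               1     | 0.00
--               1     | 4.50    =>    0.00             4.50
--               1     | 7.99
--               2     | 0.00    =>    0.00             NULL
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products
GROUP BY group_id;
```

<p>Note the edge case in group 2: if every price in a group is zero, NULLIF nulls them all out and MIN returns NULL rather than 0.00.</p>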
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one place OSX falls severely behind is its use of a BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is that Macports, like Portage and BSD Ports, is a source-code-based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
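<p>If you are ever unsure which behaviour you are getting, a quick probe like this will tell you (a throwaway sketch; the answer naturally depends on the filesystem it runs on):</p>

```shell
# Create a file, then look it up under a different case.
probe=$(mktemp -d)
touch "$probe/CaseProbe"
if [ -e "$probe/caseprobe" ]; then fs=insensitive; else fs=sensitive; fi
echo "this filesystem is case-$fs"   # "insensitive" on stock HFS
rm -rf "$probe"
```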
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…).</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line:
127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;amp;&amp;amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use &#39;port variants&#39; to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, first open up a new terminal and start the Selenium server.</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go.</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by adding the sock file path for the mysql version you&#39;re using to PHP&#39;s mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
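<p>Alternatively, if you&#39;d rather not touch an ini file at all, PDO lets you name the socket directly in the DSN. A quick sketch (the credentials and database name here are placeholders for your own):</p>
<pre><code>$dsn = 'mysql:unix_socket=/opt/local/var/run/mysql55/mysqld.sock;dbname=magento';
try {
    $pdo = new PDO($dsn, 'username', 'password');
    echo 'Connected';
} catch (PDOException $e) {
    // No ini edit needed, the socket travels with the connection string
    echo 'Connection failed: ' . $e-&gt;getMessage();
}
</code></pre>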
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a URL that I knew would be the name of an image. I knew strstr well, but that gives you the remainder of a haystack string starting from the first occurrence of some needle. I wanted the same behaviour, but from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
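<p>As an aside, since what I was really after was the filename itself, a couple of shorter routes to the same answer:</p>
<pre><code>$url = 'http://www.google.com/a/b/c/d.img';
echo substr(strrchr($url, '/'), 1); // prints d.img, minus the leading slash
echo basename($url);                // also prints d.img, basename splits on '/'
</code></pre>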
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to wrap the primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how feasible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed if it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
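<p>To sketch what I mean, here&#39;s a toy string wrapper. Str and afterLast are names I&#39;ve made up for illustration, this isn&#39;t from any existing library:</p>
<pre><code>class Str
{
    private $value;

    public function __construct($value)
    {
        $this-&gt;value = $value;
    }

    // Portion of the string after the last occurrence of $needle
    public function afterLast($needle)
    {
        $pos = strrpos($this-&gt;value, $needle);
        if ($pos === false) {
            return $this;
        }
        return new self(substr($this-&gt;value, $pos + strlen($needle)));
    }

    public function __toString()
    {
        return $this-&gt;value;
    }
}

$url = new Str('http://www.google.com/a/b/c/d.img');
echo $url-&gt;afterLast('/'); // prints d.img
</code></pre>
<p>A readable name like afterLast beats trying to remember whether it was strrchr or strchr you wanted, and your IDE can autocomplete it for you.</p>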
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two in particular stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the credentialism of academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
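<p>You can see the failure mode in isolation with a couple of lines of plain PHP (no Magento required):</p>
<pre><code>$unitPrice = ''; // what getBaseCalculationPrice() hands back for a free item
var_dump((float) $unitPrice); // float(0)
var_dump($unitPrice === '');  // bool(true) - an empty string, not a decimal
echo 'unit-price: [' . $unitPrice . ']'; // prints unit-price: [] - nothing for the parser to validate
</code></pre>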
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
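<p>That lookup order can be sketched in a few lines. To be clear, this is an illustrative function, not Magento&#39;s actual autoloader code:</p>
<pre><code>function resolveClassFile($className, $baseDir = 'app/code')
{
    // Mage_Core_Model_Foo becomes Mage/Core/Model/Foo.php
    $relativePath = str_replace('_', '/', $className) . '.php';

    foreach (array('local', 'community', 'core') as $codePool) {
        $candidate = $baseDir . '/' . $codePool . '/' . $relativePath;
        if (file_exists($candidate)) {
            return $candidate; // first match wins
        }
    }

    return null;
}
</code></pre>
<p>Because local is checked first, a copy of Checkout.php placed there shadows the original in core.</p>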
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, Metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite the vocabulary for OO software development being largely formalised during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited  to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list setup in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It is distracting actually, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper beyond providing a wonderful insight into the language and the Xerox PARC is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open $EDITOR to enter a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
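<p>A runnable sketch of the same idea, using printf for sample input instead of /etc/passwd:</p>

```shell
# Two passwd-style records, split on ':' and aligned into a table
printf 'root:x:0:0\ndaemon:x:1:1\n' | column -s ':' -t
```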
<p>These next few are specific to zsh. While I do love bash, since switching to zsh I haven&#39;t really looked back; it&#39;s little things like these that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
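<p>These conveniences aren&#39;t all on in a stock zsh. A minimal ~/.zshrc sketch of the options behind them (frameworks such as oh-my-zsh set these up for you; the alias names here follow its convention):</p>

```shell
# ~/.zshrc
setopt AUTO_CD       # a bare directory name cds into it
setopt AUTO_PUSHD    # every cd pushes the old directory onto the stack
alias d='dirs -v'    # list the stack with indices, as shown above
# jumping by bare index needs one alias per digit, e.g.:
alias 1='cd -1'      # or 'cd +1', depending on the PUSHD_MINUS option
```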
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
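<p>The trick works because :w !cmd pipes the buffer to cmd&#39;s stdin rather than writing the file itself, and tee, running under sudo, does the privileged write (% expands to the current file&#39;s path). A sketch of the same data flow outside vim, with sudo omitted:</p>

```shell
# The buffer contents arrive on tee's stdin; tee writes them to the
# target file (and echoes them to stdout, which vim shows and discards)
target=$(mktemp)
printf 'edited contents\n' | tee "$target" > /dev/null
cat "$target"    # edited contents
```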
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available on github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little extra work to ensure the gem build invokes make with the appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
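<p>If you install gems through Bundler instead, the equivalent (assuming the same MacPorts prefix and mysql55 paths) is a one-time build setting:</p>

```shell
# Bundler remembers these flags for every future install of mysql2
bundle config build.mysql2 \
  --with-mysql-lib=/opt/local/lib/mysql55/mysql \
  --with-mysql-include=/opt/local/include/mysql55/mysql
```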
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and more secure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its own port 9000 back to port 9000 on the machine you ssh&#39;d in from. When xdebug on the VM connects to localhost:9000, it actually ends up connecting to mydevmachine.local:9000.</p>
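<p>For reference, these are the xdebug 2.x php.ini settings on the VM that the tunnel pairs with; because of the -R forward, &#39;localhost&#39; on the VM now reaches your IDE:</p>

```ini
xdebug.remote_enable=1
xdebug.remote_host=localhost ; resolved inside the VM, so it hits the ssh tunnel
xdebug.remote_port=9000
```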
<p>It&#39;s a bit of a hack, but a time-saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad, but once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps, and it often coincides with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3, with features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="http://github.org">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all have robust dependency management tools available, and in the PHP camp <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it.</p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
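<p>For contrast, this is the shape of the fix Composer brings (a sketch; the package names are hypothetical): each project declares its own constraints in its own composer.json and gets its own vendor directory, so another project can happily pin a different line of the same package without fighting over one global install.</p>

```json
{
    "require": {
        "vendor/package-x": "1.0.*",
        "vendor/package-y": "2.0.*@beta"
    }
}
```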
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a>, and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it.</p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file argument to work with, so rather than creating temporary files we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
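<p>Process substitution isn&#39;t specific to xmllint; under bash or zsh, &lt;(cmd) runs cmd and hands the consuming command a file path from which to read its output. A minimal sketch with cat standing in for xmllint:</p>

```shell
# <(cmd) runs cmd and exposes its output as a readable file path;
# cat here stands in for any tool that insists on a file argument
cat <(printf 'hello\n')    # hello
```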
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
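<p>Both forms can be tried end to end in a throwaway repository (the file names and messages here are just for illustration):</p>

```shell
# Throwaway repo: two commits, then inspect with --name-only
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com && git config user.name demo
echo one > a.txt && echo one > b.txt
git add . && git commit -qm 'add a and b'
echo two >> a.txt && git commit -qam 'touch a'
git show --name-only --format= HEAD    # lists a.txt only
git diff --name-only HEAD~1..HEAD      # same idea between two revisions
```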
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart it normally (or run FLUSH PRIVILEGES to reload the grant tables). You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was broadly respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics, when slating the language, appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different from &#39;Old PHP&#39;. But &#39;Old PHP&#39; is what most people seem to find when searching Google, and this confuses them.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums or on IRC. There are countless well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing a community cannot do is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it, as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit; hell, someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it really need to be this hard?</p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method has a lot of responsibility, and method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<blockquote>
<p>Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.</p>
</blockquote>
<p>This isn&#39;t great documentation admittedly, but it basically means: if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails works through the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, that Rails programmers certainly live in a glass house and shouldn&#39;t throw stones, for one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are often wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh this list I need to prune my stale branches. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
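<p>The whole lifecycle of a stale remote-tracking ref can be reproduced in a pair of throwaway clones (git also accepts fetch --prune, which prunes as part of a normal fetch):</p>

```shell
# Reproduce a stale ref: clone twice, delete a branch from one clone,
# then prune from the other (git remote prune, or git fetch --prune)
cd "$(mktemp -d)"
git init -q --bare remote.git
git clone -q remote.git a 2>/dev/null
git clone -q remote.git b 2>/dev/null
git -C a config user.email you@example.com
git -C a config user.name demo
git -C a commit -q --allow-empty -m init
git -C a push -q origin HEAD:master
git -C a push -q origin HEAD:stale
git -C b fetch -q                 # b now tracks origin/stale
git -C a push -q origin :stale    # branch deleted upstream
git -C b remote prune origin      # b reports origin/stale as pruned
```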
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml, which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
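<p>For anyone reading this somewhere the embedded gist doesn&#39;t render, the override takes roughly this shape (a sketch, not the full gist: Mage_Log registers its frontend observers under the name &#39;log&#39;, and the events shown are two of the several it listens on):</p>

```xml
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
        </events>
    </frontend>
</config>
```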
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think; I&#39;ve audited these pages, and they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at the line below: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication, and of where bad programming practice can lead to real-world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe: they fixed one spot but (programmers being human) missed the other, identical, line, and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine: it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to use Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
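<p>A minimal sketch of such an adminhtml.xml, here hypothetically hiding the Catalog menu (the catalog node and the module name are illustrative, not taken from the gist):</p>

```xml
<?xml version="1.0"?>
<config>
    <menu>
        <!-- 'catalog' is the top-level menu node you want to hide -->
        <catalog>
            <depends>
                <!-- any module name that is not installed will do -->
                <module>Nonexistent_Module</module>
            </depends>
        </catalog>
    </menu>
</config>
```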
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
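<p>As a concrete, throwaway example (the directory, file names and dates are all invented for illustration), the boundary-file dance looks like this:</p>

```shell
# Work in a scratch directory; the timestamps are arbitrary examples.
cd "$(mktemp -d)"
touch -t 201208010000 start_date_file   # range start: 1 Aug 2012
touch -t 201208070000 end_date_file     # range end:   7 Aug 2012
touch -t 201208031200 in_range.txt      # inside the range
touch -t 201207150000 too_old.txt       # before the range
find . -type f -newer start_date_file ! -newer end_date_file \
    ! -name start_date_file ! -name end_date_file
# -> ./in_range.txt
```

<p>Note that -newer is strictly newer, so a file stamped exactly on the start boundary will not match.</p>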
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped ; terminates the command sequence (much like it does in regular bash).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change shell no problem.</p>
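<p>After that edit, /etc/shells should end up looking something like this (the stock entries are Lion&#39;s defaults and may differ on your system; /opt/local is the default Macports prefix):</p>

```text
/bin/bash
/bin/csh
/bin/ksh
/bin/sh
/bin/tcsh
/bin/zsh
/opt/local/bin/bash
```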
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It returns each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because the awk program itself is wrapped in single quotes for the shell, you pass the quote character in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally, I use sed to wrap the resulting output in the Javascript array literal symbols &#39;[&#39; and &#39;]&#39;. Awk, or any other concatenation approach, would do just fine here too.</p>
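<p>To see the transformation without a database, printf can stand in for the mysql step (the three values are made up; note the capture-group parentheses must be backslash-escaped in sed&#39;s default basic-regex mode):</p>

```shell
# printf produces one value per line, just like mysql --silent would
printf 'red\ngreen\nblue\n' \
    | awk -v q="'" '{ print q $0 q }' \
    | paste -s -d ',' - \
    | sed 's/\(.*\)/[\1];/'
# -> ['red','green','blue'];
```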
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range, you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search for documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic - it doesn&#39;t look at multi-product combinations - but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop iteration. While this means it has to be constructed again on each iteration, it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though, while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a (and I don&#39;t think they do it this way anymore) &#39;stage 1&#39; install. A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a c compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot - and by break I mean, in the absolute best case, the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host, so we use the -o bind option.</p>
<p>The last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
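<p>That copy is a one-liner. Sketched here against a scratch directory standing in for /mnt/ubuntu, so it is safe to run outside the rescue environment:</p>

```shell
# Stand-in for the real chroot root at /mnt/ubuntu
root=$(mktemp -d)
mkdir -p "$root/etc"
# Give the chroot the host's DNS configuration
cp /etc/resolv.conf "$root/etc/resolv.conf"
```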
<p>Now the chroot is ready</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5 - however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue, I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining if all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where the product id comes first). This presents a problem when we reach the fetchPairs() call.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array result set where the first column is the key and the second is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows - one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set, so that it is used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as Test Stubs or Doubles: placeholder objects to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub but an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly onto its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated, indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised across the book, and tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can easily be read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections on TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly, there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty, unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection, trying to maintain a library of what I would consider &#39;classics&#39;: the GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open a login shell another way, such as with the su - command, or via an explicit login-shell option that some desktop environments provide. In these cases the rule still applies: a login shell means .bash_profile is sourced, and .bashrc only runs if your .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
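<p>As a sketch, a minimal .bash_profile taking that approach might look like this (the PATH entry is just an example, adjust to taste):</p>

```shell
# ~/.bash_profile -- sourced by login shells only.
# One-time environment setup goes here:
export PATH="$HOME/bin:$PATH"

# Delegate everything else to .bashrc so interactive login
# and non-login shells behave the same:
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```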
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to load the block from your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>, presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can do it manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
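<p>To see what the middle of that pipeline does, feed it a sample line ("somepkg" is made up for illustration):</p>

```shell
# dpkg --get-selections emits "package state" lines (tab-separated
# in real output); sed rewrites the state so --set-selections can
# mark the package for purging instead of mere deinstall.
echo "somepkg deinstall" | sed 's/deinstall/purge/'
```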
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm must only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
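<p>For a running VM, the same rules can be added and removed on the fly through controlvm (a sketch; these commands obviously need a VirtualBox install and a running VM):</p>

```shell
# Forward host port 2222 to guest port 22 on a running VM
VBoxManage controlvm "VM name" natpf1 "guestssh,tcp,,2222,,22"

# Remove the rule again by name
VBoxManage controlvm "VM name" natpf1 delete "guestssh"
```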
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible shared x86 libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing both a compatible libXss and a number of Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in programs based on how they are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can occasionally be dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately, upgrading my test suites to be 3.6 compatible, is the extraction of the database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the loaded collection, assigning each address to an array keyed by its entity ID, and then returns whichever entry matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t already doing this. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice on restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (see <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (see <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window showing the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to resize a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
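<p>Incidentally, the sed call can be avoided entirely with bash parameter expansion. A small sketch (the filename here is made up) of deriving the output name:</p>

```shell
# Derive the '-resized' output name with bash parameter expansion
# instead of piping each name through sed. Filename is an example.
IMAGE='holiday-snap.jpg'
OUT="${IMAGE%.jpg}-resized.jpg"   # %.jpg strips the trailing .jpg
echo "$OUT"                       # prints holiday-snap-resized.jpg
```

<p>Inside the loop above, that would read: convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;${IMAGE%.jpg}-resized.jpg&quot;.</p>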
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining the 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example on how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of local branch mybranch to a branch called adiffnamefortheremotebranch at remote origin. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
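<p>If you want to convince yourself without risking a real repository, the whole round trip can be rehearsed in a throwaway one. This sketch assumes git is on your PATH; all paths and branch names are made up:</p>

```shell
# Rehearse deleting a remote branch in a throwaway repository.
set -e
DIR=$(mktemp -d)
git init -q --bare "$DIR/remote.git"
git clone -q "$DIR/remote.git" "$DIR/clone"
cd "$DIR/clone"
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD              # publish the default branch
git checkout -q -b scratch
git push -q origin scratch           # create remote branch 'scratch'
git push -q origin :scratch          # push nothing into it = delete it
git ls-remote --heads origin         # 'scratch' is no longer listed
```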
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat Perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to the output file. I.e. it soaks up all the input data before it commences writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
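<p>To see why the plain &#39;&gt;&#39; operator is not enough on its own, consider what happens when a command reads from and redirects to the same file: the shell truncates the output file before the command gets a chance to read it. A quick demonstration (the filename is made up):</p>

```shell
# Why naive redirection cannot edit a file in place: the shell
# truncates demo.txt for output, so tr reads an already-empty file.
printf 'hello\n' > demo.txt
tr a-z A-Z < demo.txt > demo.txt
wc -c < demo.txt      # prints 0 - the contents are gone
rm demo.txt
```

<p>Sponge sidesteps this by buffering everything first. If you don&#39;t have Moreutils to hand, the traditional workaround is writing to a temporary file and moving it over the original.</p>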
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 to UTF-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
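<p>Byte 0x80 illustrates the difference nicely: in cp1252 it is the Euro sign, while in iso-8859-1 it is an unused control character. A quick check with iconv and od:</p>

```shell
# Byte 0x80 decodes differently under the two encodings.
printf '\200' | iconv -f cp1252 -t utf-8 | od -An -tx1
# -> e2 82 ac  (UTF-8 for the Euro sign, U+20AC)
printf '\200' | iconv -f iso-8859-1 -t utf-8 | od -An -tx1
# -> c2 80     (UTF-8 for the control character U+0080)
```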
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with that copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it back to the original file.</p>
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and we then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
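<p>Process substitution is worth having in your toolbox beyond this trick. A classic use is comparing the output of two commands without temporary files (this is bash-specific, not plain sh):</p>

```shell
# Each <(...) expands to a /dev/fd path that yields the command's
# stdout, so diff can compare two live outputs as if they were files.
diff <(sort <<< $'b\na') <(printf 'a\nb\n') && echo 'identical'
```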
<p>Sounds complicated but looks simple.</p>
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (they were blocked because they still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
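<p>The reason the one-liner needs no extra quoting effort is shell word splitting: the unquoted command substitution splits its output on whitespace, so each module name lands in pm-disable as its own argument. A tiny sketch with made-up module names:</p>

```shell
# Unquoted command substitution word-splits its output, turning
# each whitespace-separated module name into a separate argument.
list_modules() { printf 'ad ad_channel click_filter\n'; }
set -- $(list_modules)   # deliberately unquoted
echo "$#"                # prints 3 - one argument per module
```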
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
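<p>As an aside, reasonably recent git versions (1.7.0 onwards, if memory serves) let you collapse the separate push and --set-upstream steps into one with the -u flag. Rehearsed here in a throwaway repository with made-up names:</p>

```shell
# push -u pushes the branch and records the upstream in one step.
set -e
DIR=$(mktemp -d)
git init -q --bare "$DIR/remote.git"
git clone -q "$DIR/remote.git" "$DIR/clone"
cd "$DIR/clone"
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m 'initial feature commit'
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature
git rev-parse --abbrev-ref @{u}   # prints origin/my-new-feature
```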
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups supplied to -G are <em>appended</em> to the user&#39;s existing list of groups. Without it, the user&#39;s existing supplementary groups are replaced by those supplied.</p>
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies, which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35-minute phpdoc run took just 43 seconds with DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;you@example.com&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application, but you&#39;re free to use whatever codebase you like. For the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an Ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector, along with a standard PHPUnit configuration file.</p>
<p>First, we need to make a small amendment to the PHPUnit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
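<p>Before committing, it is worth checking that the bootstrap attribute points at a file that actually exists. A small helper along these lines can do that (the function name is mine, not part of the generated build):</p>

```shell
# check_bootstrap: extract the bootstrap="" attribute from a PHPUnit
# config file and verify that the file it names exists.
check_bootstrap() {
  conf=${1:-phpunit.xml.dist}
  bootstrap=$(sed -n 's/.*bootstrap="\([^"]*\)".*/\1/p' "$conf")
  if [ -n "$bootstrap" ] && [ -f "$bootstrap" ]; then
    echo "bootstrap ok: $bootstrap"
  else
    echo "bootstrap missing or not set in $conf"
  fi
}
```

Run it from the project root; once the change above is in place it should report the tests/bootstrap.php path as ok.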
<p>Our build environment is now configured to run Zend Framework test cases: we have our Ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and PHPUnit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
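<p>As a quick sanity check, a short script can confirm that the artifact directories listed in the build output are all present (directory names taken from the output above):</p>

```shell
# check_build_artifacts: confirm the artifact directories the Ant
# build should have produced are present under the given build dir.
check_build_artifacts() {
  base=${1:-build}
  for dir in api code-browser coverage logs pdepend; do
    if [ -d "$base/$dir" ]; then
      echo "ok: $base/$dir"
    else
      echo "missing: $base/$dir"
    fi
  done
}

check_build_artifacts   # run from the project root after ant completes
```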
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP project template. To create our new project, we copy this template and give our project a name. Choose whatever you like, but for the purposes of this tutorial I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a configuration page that at first glance looks daunting. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository; for example, mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project first needs to run a build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><code>reload</code> - reload the server configuration</li>
<li><code>restart</code> - restart the server</li>
<li><code>exit</code> - shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
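<p>If you issue these often, a tiny helper saves some typing; the function name and the JENKINS_URL variable below are my own invention, not a Jenkins convention:</p>

```shell
# jenkins_admin_url: compose the URL for one of the Jenkins HTTP admin
# commands (reload, restart or exit); pipe the result into curl.
jenkins_admin_url() {
  echo "${JENKINS_URL:-http://localhost:8080}/$1"
}

jenkins_admin_url reload
# then: curl "$(jenkins_admin_url reload)"
```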
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
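<p>The port change itself is easy to script. The sketch below rehearses the sed edit on a scratch copy; the same expression can then be run with sudo against /etc/default/jenkins:</p>

```shell
# Rehearse the HTTP_PORT edit on a throwaway copy of the defaults file.
scratch=$(mktemp)
printf '# port for HTTP connector (default 8080; disable with -1)\nHTTP_PORT=8080\n' > "$scratch"

# The same expression works against /etc/default/jenkins (run with sudo).
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=8180/' "$scratch"

grep '^HTTP_PORT' "$scratch"
rm -f "$scratch"
```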
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I&#39;ve burned the most time on.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature; the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools: you can use it for backups, application deployments, tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
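<p>You can see the resume behaviour locally with a quick experiment. The sketch below fakes an interrupted copy and lets rsync finish it; --append and --no-whole-file are added here only to force delta-style behaviour on a local transfer (over ssh, as above, rsync works this out itself):</p>

```shell
# Fake an interrupted transfer, then let rsync complete it.
command -v rsync >/dev/null 2>&1 || { echo "rsync not installed; skipping demo"; exit 0; }

src=$(mktemp)
dst=$(mktemp -d)
head -c 100000 /dev/urandom > "$src"

# "Interrupted" copy: only the first half made it across.
head -c 50000 "$src" > "$dst/$(basename "$src")"

# Resume: --append transfers only the missing tail; --no-whole-file
# forces delta behaviour, which is automatic over a remote shell.
rsync --partial --append --no-whole-file "$src" "$dst/"

cmp -s "$src" "$dst/$(basename "$src")" && echo "files match"
rm -rf "$src" "$dst"
```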
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful on its own, but it was enough for google to lead me to a solution: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a>, it&#39;s time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree, use:</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository, there are a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch to it. From here it gets tricky, because subsequent invocations of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option number 4 the best, with the least amount of work.</p>
<p>You can also set the following, so that git sets up tracking automatically whenever a new branch is created from a remote-tracking branch:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
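<p>The whole flow can be rehearsed end to end with a throwaway bare repository standing in for github (all paths and names below are scratch values; newer versions of git spell the upstream step --set-upstream-to):</p>

```shell
set -e
work=$(mktemp -d)

# A bare repository standing in for github.
git init --bare -q "$work/remote.git"

# An existing codebase in its own repository.
git init -q "$work/local"
cd "$work/local"
git config user.email you@example.com
git config user.name "Your Name"
echo hello > README
git add README
git commit -qm 'initial commit'

branch=$(git symbolic-ref --short HEAD)   # master (or main on newer git)
git remote add origin "$work/remote.git"
git push -q origin "$branch"

# Point the local branch at its remote counterpart so a plain pull works.
git branch --set-upstream-to="origin/$branch" "$branch"
git pull -q
echo "tracking configured for $branch"
```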
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has in the past been notoriously difficult to apply TDD practices to. Luckily, in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a> testing frameworks released to make this a little less difficult.</p>
<p>On my most recent project I employed Mage_Test exclusively for my unit testing and, with a few exceptions, it performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or get the module directly from the developer&#39;s Subversion repository:</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing the list of tables you want to export. Another reply had a better way: use a call to mysql to get a list of tables matching a glob pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
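<p>For reference, a minimal sketch of the approach (the function name and flags here are my own; the embedded gist is the full version):</p>

```shell
# mysqldump_bypattern: dump only the tables in a database whose names
# match a SQL LIKE pattern, e.g. mysqldump_bypattern mydb 'mytables_%'.
# Assumes credentials are supplied via ~/.my.cnf or the environment.
mysqldump_bypattern() {
  db=$1
  pattern=$2
  # -N drops the column header, -B gives tab/newline-separated output.
  tables=$(mysql -N -B -e "SHOW TABLES LIKE '$pattern'" "$db")
  if [ -z "$tables" ]; then
    echo "no tables in $db match $pattern" >&2
    return 1
  fi
  # word splitting of $tables is intended here
  mysqldump "$db" $tables
}
```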
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try to install the EGit/JGit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the Indigo release p2 update repository. Until those updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
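<p>A quick check makes the distinction concrete (a minimal sketch; the :key name is arbitrary):</p>

```ruby
# A Symbol and a String made of the same characters are not equal...
puts :key == "key"         # => false

# ...but converting between the two restores equality.
puts "key".to_sym == :key  # => true
puts :key.to_s == "key"    # => true
```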
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them some respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two Symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
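<p>One way to see this for yourself is to compare object IDs (a minimal sketch; the :foo literal is arbitrary):</p>

```ruby
# Two String literals with the same characters are distinct objects...
puts "foo".object_id == "foo".object_id  # => false

# ...while two Symbol literals with the same characters are one shared object.
puts :foo.object_id == :foo.object_id    # => true
```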
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ ql() { qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null; }
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at one byte per character in a single-byte encoding, or up to four bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided, as (by default with MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but it should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed, with filesize. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to a few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysql issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place without affecting the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of those tables can be lost as writes occur to them during the backup process; that risk is weighed against the risk of blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement against an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once each table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can best be leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching all the way to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path, alternatively you can specify the exact files to commit at the end of the command separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide or show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to set up an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of Subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Mat&#237;as Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs, generally its shirtiness manifests itself when you run:</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad', (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and Bash the first element of ARGV is the program's name; in Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
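<p>If you do want the program's name in Ruby, it lives in the global variable $0 (a small sketch, reusing the test.rb example above):</p>

```ruby
#!/usr/bin/env ruby
# file test.rb
puts $0       # the script's name, like argv[0] in C, PHP or Bash
puts ARGV[0]  # the first real argument, Ruby/Perl style
```

<p>Run it as <code>ruby test.rb helloworld</code> and it prints <code>test.rb</code> followed by <code>helloworld</code>.</p>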
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straight forward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin / feature you need to go to the 'Install New Software' screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the MySQL bundled with OSX and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, over the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
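<p>Putting it all together, an update with this component is a delete followed by an add: find the document ids via termDocs() on the keyword field, delete them, then add the replacement. A rough sketch (the field names and values here are illustrative, not from a real schema):</p>&#13;

```php
// Locate the existing document(s) by the untokenized keyword field.
$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');
foreach ($index->termDocs($term) as $id) {
    $index->delete($id);
}

// Add the replacement document, re-adding the keyword identifier
// so it can be found (and replaced) again later.
$doc = new Zend_Search_Lucene_Document();
$doc->addField(Zend_Search_Lucene_Field::Keyword('path', '/somepath/somewhere'));
$doc->addField(Zend_Search_Lucene_Field::Text('title', $newTitle));
$index->addDocument($doc);
$index->commit();
```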
<p>This caught me out until I dug around the source to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional 'filter' parameter. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.  
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This utility can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to install, convert it to UTF-8 in the same way before importing it.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. A CV's purpose is twofold: a) to get yourself past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures whatever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby though, says no. Constants are just variables whose names are capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY</strong> want to change its value, Ruby won't stand in your way, or even make it that hard to do.</p>&#13;
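<p>To see it for yourself: reassigning a constant only earns you a warning on stderr, and the new value sticks. A quick sketch:</p>&#13;

```ruby
# Ruby "constants" are just identifiers that start with a capital letter.
ANSWER = 42

# Reassignment works; Ruby merely prints
# "warning: already initialized constant ANSWER" to stderr.
ANSWER = 43

puts ANSWER   # 43 -- the "constant" happily changed
```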
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are backtofront: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (I.e Java).</p>&#13;
<p>This is, of course, not absolute, there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And of course a rare breed of ex systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience and understanding of how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed you can write a simple tcpclient in PHP, get rid of all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
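<p>By way of illustration, here is a minimal TCP client sketch (the host and request are placeholders, not a specific service); strip the $ sigils and the shape is the classic C connect/write/read/close idiom:</p>&#13;

```php
<?php
// Minimal TCP client: connect, write, read, close.
// example.com / port 80 are placeholders for illustration only.
$fp = fsockopen('example.com', 80, $errno, $errstr, 5);
if ($fp === false) {
    die("connect failed: $errstr ($errno)\n");
}
fwrite($fp, "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n");
while (!feof($fp)) {
    echo fgets($fp, 1024);
}
fclose($fp);
```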
<p>The biggest benefit though of learning a bit of C or C++ is to get an “appreciation” of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: char str[20];</p>&#13;
<p>This reserves 20 bytes, enough for 19 characters plus the terminating NUL, and str refers to the address in memory where those bytes live. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic, and as such it requires the programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20-char string defined but want to store a 25-char string, you need to reallocate memory. You could declare a new array or perform a concatenation operation; either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory reallocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. Originally, this is where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. One benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension, but the more important benefit is how greatly it can improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, there are methods within Zend_Pdf that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. A way around having to physically create a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, e.g. just have '%Y' for a yearly grouping, or '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded post data) and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing+Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated by having a mail reader retrieve any missed payment notifications and parse them. For each failure notification, you can add a retry to a job queue or other batch processing stack. It would be worth adding some sort of retry threshold, though, as well as sending notifications to developers, to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals to include external libraries is a convenient way of moving third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without actually having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
<p>On Windows this is fine; on a case-sensitive file system (most Unix file systems, or a case-sensitive HFS+ volume on a Mac), it will not work.</p> ]]>
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales-ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<pre>$product->setNumSales(1234);
$product->getResource()->saveAttribute($product, 'num_sales');</pre>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second is the attribute code. To find out the attribute code, look it up either in the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around 1/5s; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. Any date_column value earlier than this cutoff is therefore more than 30 days old, and the query above will match it.</p>&#13;
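<p>You can sanity check that interval arithmetic from the shell. A sketch using GNU date (BSD date on Mac OSX takes different flags):</p>

```shell
# Mirror DATE_SUB('2010-05-20', INTERVAL 30 DAY) with GNU date;
# the relative item '30 days ago' is applied to the given date
date -d '2010-05-20 30 days ago' '+%Y-%m-%d'
```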
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already contains a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block, add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the path to the text template you want to include, relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included (note the echo: getChildHtml() returns the rendered block rather than printing it). Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile, ~/.profile or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
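<p>For example, a config sketch (assuming de_DE.UTF-8 appears in the output of 'locale -a'; substitute your own locale):</p>

```shell
# Append to ~/.bash_profile (or /etc/profile for a system-wide setting).
# de_DE.UTF-8 is just the example locale used in this post.
export LC_ALL='de_DE.UTF-8'
```

<p>Open a new terminal (or source the file) and run 'locale' to confirm the setting took effect.</p>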
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non-utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However, extended characters like accents, symbols and umlauts differ: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright sign as utf-8 and you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both of these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output, pass a third parameter giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_COMPAT, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
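<p>The same conversion is available from the shell via the iconv utility, which is handy for inspecting the bytes by hand. A sketch: the copyright sign is the single byte 0xA9 in iso-8859-1, and becomes the two-byte sequence 0xC2 0xA9 in utf-8:</p>

```shell
# 0xA9 (octal 251) is the iso-8859-1 copyright sign; convert it to
# utf-8 and dump the resulting bytes in hex
printf '\251' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1
# od shows the two utf-8 bytes: c2 a9
```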
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, at least not with the BSD sed that ships with Mac OSX; you'll either get a script processing error or something like: sed: -i: No such file or directory. (GNU sed accepts -i with no argument.)</p>&#13;
<p>The trick is to pass an empty backup suffix: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
&#13;
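<p>To summarise the portability difference (file paths here are illustrative):</p>

```shell
printf 'hello world\n' > /tmp/helloworld.txt

# GNU sed (Linux): the optional backup suffix attaches to -i itself,
# so -i alone means "edit in place, no backup"
sed -i 's/hello/goodbye/g' /tmp/helloworld.txt

# BSD sed (Mac OSX) takes the suffix as a separate argument, so an
# in-place edit with no backup needs an explicit empty string:
# sed -i '' 's/hello/goodbye/g' /tmp/helloworld.txt

cat /tmp/helloworld.txt
```

<p>If you want a safety net on either platform, pass a real suffix such as .bak and sed will keep the original file alongside the edited one.</p>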
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expression generates one index per element of FILES: ${#FILES[@]} is the number of elements, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4, one per line.</p>&#13;
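<p>Worth noting: if you only need the values and not the indices, bash can iterate the array directly. A sketch:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

# Iterate the elements themselves; the quotes around ${FILES[@]}
# keep any paths containing spaces intact as single elements
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```

<p>The index form is still useful when you need to walk two arrays in lockstep, but for a plain foreach this version is shorter and safer.</p>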
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details: baseurls, or test payment and shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one-liner: for file in *; do convert -scale 1024x768 "$file" "resized_$file"; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Finding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves, rather than having to define everything on the command line (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run the source version. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on Linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flat files like /etc/passwd and /etc/group to store information about users and domains, so the typical Unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ sudo dscl localhost -append /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership &lt;usernamehere&gt;
</code></pre>
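<p>To double-check the membership afterwards you can list the user&#39;s groups; this is a generic Unix check rather than anything Open Directory specific, and you may need a fresh login session before the new group shows up:</p>

```shell
# List the groups of the current user.
# Swap in another username to inspect a different account.
id -Gn "$(whoami)"
```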
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
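<p>If you&#39;d rather not open ~/.gitconfig in an editor, the same alias can be registered straight from the terminal:</p>

```shell
# Register the "sup" alias; single quotes stop the shell from
# expanding the backticks before git stores the value.
git config --global alias.sup '!git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`'
```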
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variable assignments cannot be passed along the pipeline, as each subprocess gets its own copy of the environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
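<p>A minimal sketch of the effect, plus one of the workarounds from that page (keeping the loop in the current shell via process substitution):</p>

```shell
#!/usr/bin/env bash
# The while loop below runs in a subshell because it is a pipeline
# stage, so its increments are invisible to the parent shell.
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "after pipeline: $count"               # prints 0, not 3

# Workaround: process substitution keeps the loop in the current
# shell, so the assignment survives.
count=0
while read -r line; do
    count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "after process substitution: $count"   # prints 3
```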
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches share a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together.</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
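<p>For reference, that command just writes the following section into your ~/.gitconfig:</p>

```ini
[merge]
	ff = only
```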
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book though is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As a tutorial goes though it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to and then some. I found the chapter on Test Doubles (that is mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads in my experience, at best, to confusion. And at worst, to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is that of always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, motivations for it, and perhaps a brief comparison between the two principal TDD xUnit styles: Statist TDD and Mockist/London School TDD. The former is a test style mainly interested in setting up some state, running a behaviour and checking that the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
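<p>As a worked example, using the placeholder names from the query above (note <em>column</em> is just a stand-in here; it is a reserved word in MySQL, so a real table would use a different name): suppose group 1 holds prices 0.00, 4.99 and 2.50.</p>

```sql
-- Hypothetical rows: (group_id, column) = (1, 0.00), (1, 4.99), (1, 2.50)
SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) AS min_price
FROM tableref
GROUP BY tableref.group_id;
-- min_price comes back as 2.50: NULLIF turns the 0.00 row into NULL,
-- which MIN ignores. A plain MIN would have returned 0.00.
```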
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one place OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
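<p>If you&#39;re unsure which behaviour a given filesystem has, a quick probe from the shell settles it:</p>

```shell
#!/bin/sh
# Create a file with an upper-case name, then test whether the
# lower-case name resolves to the same file.
dir=$(mktemp -d)
touch "$dir/PHP"
if [ -e "$dir/php" ]; then
    echo "case-insensitive filesystem"   # default HFS+ behaves this way
else
    echo "case-sensitive filesystem"     # e.g. ext4 on Linux
fi
rm -rf "$dir"
```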
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up. So let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying, so we will install mysql_select, which lets us select a version to activate and gives us proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything. So rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl extension enabled, to see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing it under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants  to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the socket file path for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
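<p>Note that tee on its own overwrites its target: if the ini file already holds other directives, use tee -a so the new line is appended instead. A quick demonstration of the difference against a scratch file:</p>

```shell
# scratch file standing in for the real ini
ini=$(mktemp)
echo 'some_existing_directive=1' > "$ini"
# `tee` alone would clobber the file; `-a` appends
echo 'pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock' \
  | tee -a "$ini" > /dev/null
cat "$ini"
```

<p>Both lines survive; with plain tee only the socket line would remain.</p>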
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track it down.</p>
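<p>If you mount the same host often, an entry in ~/.ssh/config saves even that option: sshfs (like ssh, scp and git) reads it automatically. A sketch, with an illustrative host alias and key path:</p>

```shell
# append a host entry; afterwards a plain
#   sshfs awshost:/var/www/ ~/Sites/awshost
# picks up the right user, hostname and key
mkdir -p "$HOME/.ssh"
cat >> "$HOME/.ssh/config" <<'EOF'
Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
EOF
```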
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d,img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to wrap the primitive types such as String, Integer, Array and Float. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, or indeed whether it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
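<p>PHP isn&#39;t alone in this, for what it&#39;s worth. The Bourne shell spells the same pair of operations as # versus ## parameter expansion, which is hardly more memorable. A sketch of the equivalent in shell:</p>

```shell
url='http://www.google.com/a/b/c/d.img'
# strip the longest prefix matching */ - like strrchr, minus the leading '/'
echo "${url##*/}"  # prints d.img
# strip the shortest prefix matching */ - like strstr, minus the needle
echo "${url#*/}"   # prints /www.google.com/a/b/c/d.img
```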
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However, I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing, and those, whether in the other labs or in management, who, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hang.</p>
<p>The book contains a number of powerful scenes. Two in particular stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
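<p>A minimal sketch of that override, using a throwaway tree to stand in for a Magento install (in a real store you would run the mkdir and cp from the document root):</p>

```shell
# throwaway stand-in for a Magento install root
root=$(mktemp -d)
mkdir -p "$root/app/code/core/Mage/GoogleCheckout/Model/Api/Xml"
echo '<?php // core class' > "$root/app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php"

# the override: mirror the core path under local/ and copy the class there,
# then apply the fix to the copy; local/ shadows core/ in the classloader
mkdir -p "$root/app/code/local/Mage/GoogleCheckout/Model/Api/Xml"
cp "$root/app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php" \
   "$root/app/code/local/Mage/GoogleCheckout/Model/Api/Xml/"
```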
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
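<p>Under the hood, -s quit and -s stop simply deliver SIGQUIT and SIGTERM respectively to the master process, so given only the pid file you can trigger the same graceful shutdown with kill (the pid file path shown is the Debian/Ubuntu default):</p>

```shell
# the signals behind the two -s commands
kill -l QUIT   # prints 3, the SIGQUIT number (graceful)
kill -l TERM   # prints 15, the SIGTERM number (immediate)
# hand-rolled equivalent of `nginx -s quit`:
#   sudo kill -QUIT "$(cat /var/run/nginx.pid)"
```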
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did in November though, resolve to read more technical books, and to particularly focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adhere to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby has maybe made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, Metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite formalising most of the vocabulary for OO software development during the development of Smalltalk, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking was lost in the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading, even if it was just a little bit, every day, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day just before bed it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day, for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list. It really is a distraction, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and after a few 30 minute sprints picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. The paper, beyond providing a wonderful insight into the language and Xerox PARC, is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that, when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type the name of a directory to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
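<p>Neither behaviour is a stock zsh default; they come from a couple of setopt flags plus some aliases (frameworks such as oh-my-zsh set up something similar for you). A minimal ~/.zshrc sketch:</p>

```shell
# ~/.zshrc - directory stack conveniences
setopt auto_cd            # a bare directory name cd's into it
setopt auto_pushd         # every cd pushes the old directory onto the stack
setopt pushd_ignore_dups  # keep the stack free of duplicates
alias d='dirs -v'         # list the stack with indexes
# let a bare 1..5 jump straight to that stack entry
for i in 1 2 3 4 5; do alias "$i"="cd -$i"; done
```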
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have Macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
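<p>If the project pulls the gem in through Bundler, the same flags can be saved once with bundle config, so a plain bundle install works too (per-gem build settings are a standard Bundler feature):</p>

```shell
# persist the build flags for the mysql2 gem; paths as above,
# assuming the MacPorts mysql55 port
bundle config build.mysql2 \
  --with-mysql-lib=/opt/local/lib/mysql55/mysql \
  --with-mysql-include=/opt/local/include/mysql55/mysql
```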
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>This works fine. However, if you want to debug during a PHPUnit test, you would normally do this:</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its own port 9000 back to port 9000 on the ssh client. When xdebug on the VM connects to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
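<p>If you find yourself doing this a lot, the forward can live in your ssh client config instead, so it comes up on every login (a sketch, assuming OpenSSH; the host name is illustrative):</p>

```
# ~/.ssh/config
Host myvm.local
    # Equivalent to: ssh -R 9000:localhost:9000 myvm.local
    RemoteForward 9000 localhost:9000
```

<p>After that, a plain ssh myvm.local establishes the xdebug tunnel automatically.</p>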
<p>It&#39;s a bit of a hack, but a time-saving one. The other option is Vim and its xdebug plugin, which isn&#39;t bad, but once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months, PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;full-fat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a> and <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a>, together with the ubiquity of <a href="http://github.com">Github</a>, is giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a>, <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires the stable release of package y, while package z requires the beta of package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
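<p>Composer, by contrast, resolves dependencies per project. A minimal composer.json sketch (monolog is a real package; the acme one is made up for illustration) looks like this:</p>

```json
{
    "require": {
        "monolog/monolog": "1.*",
        "acme/package-y": "dev-master"
    }
}
```

<p>Running composer install then resolves and vendors those libraries for this project alone, recording the exact versions in composer.lock, so two projects can happily depend on different versions of the same library.</p>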
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host it themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch, scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with rather than piped input (though some builds accept &#39;-&#39; for stdin). We can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected in commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly:</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, log in without a password and issue an update query against the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password = PASSWORD(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
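<p>As an aside, if you&#39;d rather not restart a second time, the MySQL manual&#39;s password-reset procedure reloads the grant tables in place and then sets the password normally:</p>

```sql
-- While connected to a server started with --skip-grant-tables
FLUSH PRIVILEGES;  -- re-enables the grant tables
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('newpassword');
```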
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of PHP&#39;s popularity and low barrier to entry. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was broadly respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long-ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised is that Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails searches the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
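<p>To see why ordering matters, here is a tiny plain-Ruby sketch (not Rails itself, just an illustration of the documented bottom-to-top search; the handler names are made up):</p>

```ruby
# Handlers in registration order, top to bottom, as they would
# appear in application_controller.rb.
HANDLERS = [
  [ArgumentError, :render_argument_error], # registered first (top)
  [Exception,     :render_catch_all],      # registered last (bottom)
]

# Rails searches from the bottom up and invokes the first handler
# whose class satisfies exception.is_a?(klass).
def handler_for(exception)
  pair = HANDLERS.reverse.find { |klass, _| exception.is_a?(klass) }
  pair && pair.last
end

# Everything is_a? Exception, so the catch-all at the bottom shadows
# the more specific ArgumentError handler:
handler_for(ArgumentError.new) # => :render_catch_all
```

<p>Register the catch-all first (at the top) and the specific handlers after it, and each exception finds its most specific handler instead.</p>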
<p>What can we learn from this? Well, that Rails programmers live in a glass house and shouldn&#39;t throw stones, for one thing. But on a slightly less trollish note, there is a problem here for all novice programmers who turn to Google to help them solve problems. The answers on Google are usually either wrong or, at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see which branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branch list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
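<p>As an aside, newer versions of git can fetch and prune stale remote-tracking branches in a single step:</p>

```shell
# Fetch from origin, dropping any remote-tracking branches that no
# longer exist on the remote (same effect as a fetch followed by
# git remote prune origin)
git fetch --prune origin
```

<p>Handy if you always want your remote-tracking branches to mirror reality.</p>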
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
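<p>The override itself is just configuration. A sketch of the shape it takes (the event and observer node names here are illustrative; match them against the ones registered in Mage_Log&#39;s config.xml for your version):</p>

```xml
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers>
                    <!-- same node name as the core observer you want gone -->
                    <log>
                        <type>disabled</type>
                    </log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```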
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning and, in the main, it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to XSS-safe the code: they fixed one bit, but (and programmers are human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages: they promise so much, and when, on that special day, the moon is aligned with Mars, it all just works and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well, specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags, which let you provide environment-specific configuration for your provisioning. They are not supported by the stock Chef gem (currently version 10.12.0); to use Data Bags with Chef Solo you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition and adds a dependency on a non-existent module. Effectively, this disables the menu item, because it no longer meets the defined dependency requirements.</p>
<p>As always with Magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of MySQL in production, and the resulting Magento report/xxxx files swamped everything else I might have wanted to look at.</p>
<p>Specifically, I wanted to delete all the files that were created between a start and an end date.</p>
<p>GNU find makes this easy:</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and ; terminates the command sequence (much like it does in regular bash).</p>
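<p>Putting it all together, here&#39;s a self-contained run-through in a scratch directory (the file names and dates are illustrative). One subtlety: the end boundary file is not strictly newer than itself, so ! -newer matches it and it has to be excluded by name:</p>

```shell
# Build a scratch directory with touch-dated boundary and sample files
workdir=$(mktemp -d)
cd "$workdir"

touch -t 202001010000 start_date_file   # boundary: 2020-01-01 00:00
touch -t 202012312359 end_date_file     # boundary: 2020-12-31 23:59

touch -t 202006150000 inside_range      # between the boundaries
touch -t 201906150000 before_range      # older than the start boundary
touch -t 202106150000 after_range       # newer than the end boundary

# List, then delete, the files inside the range, leaving the boundary
# files themselves alone
find . -type f -newer start_date_file ! -newer end_date_file ! -name end_date_file -ls
find . -type f -newer start_date_file ! -newer end_date_file ! -name end_date_file -exec rm -f {} \;
```

<p>Only inside_range is removed; the boundary files and the out-of-range files survive.</p>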
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in: it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your Macbook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro Magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; Macbook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the Mini DisplayPort. This means if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get GNU ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh, if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going:</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you make the change no problem.</p>
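<p>For the record, the change amounts to something like the following. It&#39;s sketched against a scratch copy of /etc/shells; on your own machine you&#39;d edit the real file via sudo, and the /opt/local path assumes a default Macports prefix:</p>

```shell
# Work on a scratch copy of /etc/shells (edit the real file, as root,
# on your own machine)
shells_file=$(mktemp)
cat /etc/shells > "$shells_file" 2>/dev/null || printf '/bin/sh\n/bin/bash\n' > "$shells_file"

# Append the Macports bash only if it isn't already listed
grep -qx '/opt/local/bin/bash' "$shells_file" || echo '/opt/local/bin/bash' >> "$shells_file"
```

<p>With the real /etc/shells updated, chsh will accept /opt/local/bin/bash without complaint.</p>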
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query to mysql and ask it to give you raw, unadorned output. It will return each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap each value in single quotes. Because you can&#39;t easily embed a single quote inside the single-quoted awk program, you pass one in via the q variable. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols (note the escaped parentheses for the back-reference). Awk or any other concatenation approach would do just fine here too.</p>
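<p>To see just the formatting stages in isolation, you can stand printf in for mysql (the colour values here are made up):</p>

```shell
# printf stands in for the mysql --silent output: one value per line
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d, - \
  | sed 's/\(.*\)/[\1];/'
# prints ['red','green','blue'];
```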
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way, e.g.:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, i.e. where it is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select me anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton; after all, I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed, when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It means the validator also has to be constructed anew on each loop, but that allows PHP to free the memory it is using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, you create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can (in theory) run on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific, compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. And by break, in the absolute best case, I mean merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. First step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. This is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column came first and the product id column second (the reverse of the if branch). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just two rows, one for each unique status code. For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it gets used as the key.</p>
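<p>The collapse is easy to reproduce outside PHP. Here&#39;s a rough shell analogue of fetchPairs (an awk associative array keyed on the first column), fed three rows with made-up product ids. Keyed on the status you get two entries; keyed on the product id you get the expected three:</p>

```shell
# (status, product_id) rows: keying on column one collapses the result
# down to one entry per distinct status value
printf '1 101\n1 102\n-1 103\n' \
  | awk '{ pairs[$1] = $2 } END { n = 0; for (k in pairs) n++; print n }'
# prints 2

# (product_id, status) rows: one entry per product, as intended
printf '101 1\n102 1\n103 -1\n' \
  | awk '{ pairs[$1] = $2 } END { n = 0; for (k in pairs) n++; print n }'
# prints 3
```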
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve the messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and your spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and, importantly, expressive. The concept that software is about communication is emphasised throughout the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt the introductory sections to TDD, and particularly the authors&#39; OO philosophies, were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being. </p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display in the depth and quality of their references. While being a fairly domain-specific (Mock Object) text, GOOS serves as a wonderful launching pad for further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice them in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login session, that is, whenever you open up a terminal.</p>
<p>There is some confusion over login shells started from within an existing session, such as when you use the <em>su -</em> command or run an explicit login shell as some desktop environments provide. In these cases the same rule applies: a login shell sources .bash_profile, and .bashrc is only read if .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile: things like paths and any one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
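<p>A minimal sketch of that arrangement (the PATH entry is just an example):</p>
<pre><code># ~/.bash_profile -- read by login shells only
# one-time environment setup
export PATH=&quot;$HOME/bin:$PATH&quot;

# delegate everything else to ~/.bashrc so login and
# interactive non-login shells behave the same
if [ -f &quot;$HOME/.bashrc&quot; ]; then
    . &quot;$HOME/.bashrc&quot;
fi
</code></pre>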
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in it&#39;s various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/\lpurge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
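<p>Broken down stage by stage (the comments are mine), the oneliner does this:</p>
<pre><code># list every package and its state (install / deinstall / purge / hold)
dpkg --get-selections |
# keep only packages removed but not yet purged
grep deinstall |
# rewrite their state to &#39;purge&#39;
sed &#39;s/deinstall/purge/&#39; |
# feed the amended selections back to dpkg
dpkg --set-selections
# then act on everything marked for purging
dpkg -Pa
</code></pre>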
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p><em>modifyvm</em> must only be used when the VM is powered off; use <em>controlvm</em> for a running VM.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
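<p>For a VM that is already running, the equivalent (a sketch reusing the same rule name and ports as above) goes through <em>controlvm</em>:</p>
<pre><code>VBoxManage controlvm &quot;VM name&quot; natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage controlvm &quot;VM name&quot; natpf1 delete &quot;guestssh&quot;
</code></pre>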
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that then, we need compatible shared x86 libraries, but when you get these sorts of issues it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libxss and assorted Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like the one below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in programs based on how they are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change in code works, nothing appears to break and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource to check out for anyone using Oneiric, I think, is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips there to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and getting a sane desktop environment configured, with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find myself falling back on, time and time again: <a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open an X window showing the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, Identify lets you get information about a file: size, format, colour depth, offset, and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to resize a bunch of images from 1920x1080 down to 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage: convert &lt;options&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
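<p>As an aside, the filename juggling the sed call does can also be handled with bash parameter expansion, which avoids spawning a subshell per file. A small sketch (the filename is just an example):</p>

```shell
# The sed in the loop above can be replaced with bash parameter
# expansion: ${IMAGE%.jpg} strips the .jpg suffix from the end
# of the value, then we append the new suffix.
IMAGE='holiday-photo.jpg'
RESIZED="${IMAGE%.jpg}-resized.jpg"
echo "$RESIZED"
```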
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of local branch mybranch to a branch called adiffnamefortheremotebranch at remote origin. Now, can you see where this is going with respect to our delete? A leading : with no local branch name is saying push <em>nothing</em> into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
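<p>It's also worth knowing that since Git 1.7.0 there is an explicit --delete flag that does exactly the same thing and is harder to mistype. A self-contained sketch against a throwaway local "remote" (paths here are just for the demo):</p>

```shell
# Demo: create a throwaway bare repo to stand in for origin,
# push a branch to it, then delete that branch with --delete,
# which is equivalent to the leading-colon form.
set -e
DIR=$(mktemp -d)
git init --bare -q "$DIR/origin.git"
git init -q "$DIR/work" && cd "$DIR/work"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git push -q "$DIR/origin.git" HEAD:develop    # remote now has develop
git push -q "$DIR/origin.git" --delete develop
git ls-remote --heads "$DIR/origin.git"       # develop is gone: no output
```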
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file. That is, it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 (cp1252) to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
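<p>For example (the exact wording varies between versions of file, so treat the output as indicative):</p>

```shell
# Ask file(1) for the character encoding of a file. -b suppresses
# the filename; --mime-encoding prints just the encoding guess.
printf 'hello world\n' > sample.txt
file -b --mime-encoding sample.txt
```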
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv, and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
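<p>If Moreutils isn't available, the same in-place effect can be had with a temporary file, at the cost of a little elegance. A minimal sketch (the filename and sample byte are just for illustration):</p>

```shell
# Fallback when sponge is not installed: write iconv's output to
# a temporary file, then move it over the original. \351 is byte
# 0xE9, 'e-acute' in cp1252.
printf 'caf\351\n' > legacy.txt
iconv -f cp1252 -t utf-8 legacy.txt > legacy.txt.new && mv legacy.txt.new legacy.txt
```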
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
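<p>If the process substitution feels too clever, a plain pipe does the same job, since tar happily reads the archive from stdin. In this sketch a locally built archive stands in for the wget download:</p>

```shell
# Plain-pipe equivalent of the one-liner above. With a real URL
# it would be: wget -q -O - http://www.somewhere.com/atarfile.tar.gz | tar zxv
mkdir -p atarfile && echo 'hello' > atarfile/file.txt
tar zcf atarfile.tar.gz atarfile
rm -r atarfile                    # remove the originals...
cat atarfile.tar.gz | tar zxv     # ...and extract them back from the archive
cat atarfile/file.txt
```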
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (the ones that still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can take the module list output via <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
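<p>Since Git 1.7.0 the last two steps can be collapsed into one: pushing with -u (--set-upstream) configures the tracking relationship in the same command. A self-contained sketch against a throwaway local remote:</p>

```shell
# Demo with a throwaway bare repository standing in for origin.
set -e
R=$(mktemp -d)
git init --bare -q "$R/remote.git"
git init -q "$R/local" && cd "$R/local"
git remote add origin "$R/remote.git"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature            # push and set upstream in one go
git rev-parse --abbrev-ref my-new-feature@{upstream}   # prints origin/my-new-feature
```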
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
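<p>To confirm the change took, list the user's groups with id; wheel should now appear. (Note the user may need to log out and back in before new sessions pick up the group. The TARGET_USER variable below is a placeholder that defaults to the current user so the snippet runs anywhere.)</p>

```shell
# List the group names a user belongs to. For the example in the
# post you would run: id -nG aaron
TARGET_USER=${TARGET_USER:-$(id -un)}
id -nG "$TARGET_USER"
```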
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably about its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application of <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo&#39;s master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch, and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -a -m &#39;Apply stashed changes&#39;
</code></pre>
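<p>Note that git stash pop does the apply-and-drop in one step, whereas apply alone leaves the stash entry around. A self-contained sketch with a throwaway repository:</p>

```shell
# Demo: work staged on the default branch gets stashed, then
# popped onto develop, where it can be committed.
set -e
S=$(mktemp -d) && cd "$S"
git init -q .
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git branch develop
echo 'new feature work' > feature.txt
git add feature.txt
git stash -q                  # same effect as 'git stash save'
git checkout -q develop
git stash pop -q              # apply the changes AND drop the stash entry
cat feature.txt               # the work followed us to develop
```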
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
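<p>There is also a shorthand for the first form: with --track, Git infers the local branch name from the remote branch. A self-contained sketch using a throwaway local repository as the remote:</p>

```shell
# Demo: clone a throwaway repo that has a develop branch, then
# use the --track shorthand; the local name 'develop' is inferred
# from origin/develop.
set -e
T=$(mktemp -d)
git init -q "$T/src" && cd "$T/src"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git branch develop
cd "$T" && git clone -q "$T/src" clone && cd clone
git checkout -q --track origin/develop
git rev-parse --abbrev-ref HEAD       # prints develop
```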
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure your environment is set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, PHPUnit must load a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases; we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly, you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. It&#39;s now time to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to substitute &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run its first build before the project workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that, and Jenkins will check out a copy of our project from git and then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete, you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Shut the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have lost the most time to.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful by itself, but it helped Google lead me to a solution: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
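<p>As an aside (this sketch is mine, not part of the gist below): if you only care about PNGs you can skip the gem entirely, because a PNG file stores its width and height as two big-endian 32-bit integers at bytes 16-23 of the file, inside the IHDR chunk. A minimal, PNG-only illustration of what a gem like image_size does under the hood:</p>

```ruby
# PNG layout: 8-byte signature, then the IHDR chunk
# (4-byte length, 4-byte type "IHDR", then width and height
# as big-endian 32-bit integers at file offsets 16..23).
def png_dimensions(path)
  header = File.binread(path, 24)   # only the header is read, so this is fast
  header[16, 8].unpack('N2')        # => [width, height]
end
```

<p>This reads only 24 bytes regardless of file size; a real gem such as image_size also handles JPEG, GIF and friends, which is why it&#39;s the better general-purpose choice.</p>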
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, E/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module:</p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent &#39;git pull&#39; invocations will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream </p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally, I find option 4 the best, as it involves the least amount of work.</p>
<p>You can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>to avoid having to do this in the future.</p>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software on which it has historically been notoriously difficult to employ TDD practices. Luckily, in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a> testing frameworks released to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it has performed extremely well.</p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; for them it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module, see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this. My reply was to use a shell script with an array containing a list of tables you wanted to export. Another reply had a better way: using a call to mysql to get a list of tables matching a glob pattern, putting them in an array, and then iterating over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using the Strings from the keys array instead, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby, the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective, it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value:</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
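<p>Going the other way also works: if you control the hash, you can normalise every key to a Symbol up front, so lookups behave predictably whichever form the keys started in. A small sketch (the variable names here are mine, not from the examples above):</p>

```ruby
# Rebuild the hash with every key converted to a Symbol.
# Hash#map yields [key, value] pairs; Hash[] reassembles them.
mixed = { :mykey => 'hello world', 'another_key' => 'goodbye world' }
normalised = Hash[mixed.map { |k, v| [k.to_sym, v] }]

['mykey', 'another_key'].each { |k| puts normalised[k.to_sym] }
```

<p>On modern Ruby (2.5+) the same thing is spelled <em>mixed.transform_keys(&amp;:to_sym)</em>.</p>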
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install, simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect as much of the learning Ruby literature says to basically ignore them for now, or launch into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this looks like another variable construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hashkeys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable; and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if they consist of the same sequence of characters, are different objects. In Ruby, two Symbols with the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
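<p>To make these identity and immutability properties concrete, here is a short Ruby session (the <code>:pending</code> name and the statuses hash are just illustrative):</p>

```ruby
# Two symbols written with the same name are the very same object...
puts :pending.object_id == :pending.object_id   # true

# ...whereas two identical string literals are distinct objects
# (assuming frozen string literals are not enabled).
puts "pending".object_id == "pending".object_id # false

# Symbols are immutable.
puts :pending.frozen?                           # true

# Every use of :pending as a hashkey refers to one shared object.
statuses = { :pending => 1, :active => 2 }
puts statuses[:pending]                         # 1

# Converting between the two representations.
puts :pending.to_s                              # "pending"
puts "pending".to_sym.inspect                   # :pending
```

<p>Only the symbol identity and <code>frozen?</code> results are guaranteed by the language; the string comparison reflects Ruby's default behaviour of allocating a new object per literal.</p>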
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively yet. My gut, though, says no. I feel in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found explaining them are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debugging purposes or, at a minimum, for creating and restoring backups. As database sizes increase this poses risks, particularly with large InnoDB-based applications like Magento, where databases can easily run into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra cputime consumed decompressing a bzip2 datadump may actually be less preferable to the few extra megabytes gained by using the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysql issues a begin statement before dumping the contents of a table, ensuring a consistent state of the table without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to it during the backup process. This risk is weighed against the risk of blocking access to the table during a lengthy backup process.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool, but the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>Say you have a string, for example:</p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the Vim substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period. This is because the regular expression is matching greedily, up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply change the expanding part of the regexp to <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In Vim the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory (pass --bare for a bare repository).</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or in git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to: $ svn del</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file, and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes a word can have a completely different meaning, such as 'bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and in some cases they'll sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and Bash work the same way.</p>&#13;
<p>Ruby does things a different (Perl) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C and Bash the first element of ARGV is the program's name. In Ruby, as in Perl, it is the first argument passed into the program.</p>&#13;
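<p>A quick way to see both behaviours side by side is to generate a throwaway script and run it. This is just an illustrative sketch: it assumes a <code>ruby</code> binary on your PATH, and the filename and argument are made up.</p>

```ruby
require "tempfile"

# Write a tiny script that prints the program name and the first argument.
script = Tempfile.new(["argv_demo", ".rb"])
script.write(<<~DEMO)
  puts $0       # the program's name, Ruby's counterpart to C's argv[0]
  puts ARGV[0]  # the first *argument* passed to the program
DEMO
script.close

# Run it with one argument and capture both printed lines.
output = `ruby #{script.path} helloworld`.lines.map(&:chomp)
puts output[0]  # the temp script's path, not "helloworld"
puts output[1]  # => "helloworld"
```

<p>In C or PHP, argv[0] would have been the script path; in Ruby that lives in <code>$0</code> (also available as <code>$PROGRAM_NAME</code>) and ARGV holds only the arguments.</p>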
<p>I'm trying to think which makes more sense, probably the Ruby/Perl implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the Install New Software screen. On a Mac it's found by highlighting the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature; you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' direction to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs() was going wrong. Hopefully this saves someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and I think largely unheralded, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter = null) method takes an optional 'filter' parameter. The online documentation makes scant mention of what values this $filter parameter can take. Luckily, in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
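<p>One caveat worth knowing: these values are bit flags, and getMethods() treats a combined filter as a union — ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_STATIC returns methods that are public <em>or</em> static, not only those that are both. A quick self-contained sketch (the Example class here is made up for illustration):</p>&#13;

```php
class Example
{
    public function visible() {}
    protected static function cached() {}
    private function hidden() {}
}

$r = new ReflectionClass('Example');

// Union semantics: matches methods that are public OR static
$filter = ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_STATIC;
$names  = array();
foreach ($r->getMethods($filter) as $m) {
    $names[] = $m->getName();
}
sort($names);

// cached (static) and visible (public) match; hidden matches neither flag
print_r($names);
```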
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param  string $controller_class&#13;
 * @return ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, in PHP it seems documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through the back pages of the GOF book, or, in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully, it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at the time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First, back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by re-importing this dump file. By default VBulletin applies the 'latin1' charset to its schema; we want to replace all instances of this in the dump file with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off, and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you also have a language pack to install, run it through the same conversion so its phrases are UTF-8 encoded too.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way via AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers, it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool, it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV, its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months,  let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older, more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets, a giant mess of fonts, it isn't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it is to read. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmers' 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won’t stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’ but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’ You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, but note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those who are self-taught and learned PHP to make a website, and those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And, of course, a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue though, for PHP coders it is an important environment in which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
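<p>To make this concrete, here is a short sketch of PHP's stdio-style file API used just as you would use C's — fopen/fgets/feof/fclose behave like their C namesakes (php://temp stands in for a real file here):</p>&#13;

```php
// PHP's file functions mirror C's stdio almost call for call.
$fh = fopen('php://temp', 'w+');     // C: FILE *fh = tmpfile();
fwrite($fh, "first line\nsecond line\n");
rewind($fh);                         // C: rewind(fh);

while (!feof($fh)) {                 // C: while (!feof(fh))
    $line = fgets($fh);              // C: fgets(buf, sizeof buf, fh)
    if ($line !== false) {
        echo $line;
    }
}
fclose($fh);                         // C: fclose(fh);
```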
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and be left with a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an appreciation of memory management. C is rare among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares storage for a 20-byte string (19 characters plus the terminating NUL), and str points to the address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules/extensions come in: originally, these were where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
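<p>You can see both orderings from PHP itself: pack()'s 'N' format code emits a 32-bit integer big-endian while 'V' emits it little-endian, so the same value comes out with its bytes mirrored:</p>&#13;

```php
// The same 32-bit value serialised in both byte orders
$value = 0x01020304;

echo bin2hex(pack('N', $value)), "\n"; // big endian: 01020304
echo bin2hex(pack('V', $value)), "\n"; // little endian: 04030201
```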
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly you have methods within zend_pdf to expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
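<p>For example, you can pair php://memory with fputcsv() to build a CSV string without ever touching the filesystem (a minimal sketch):</p>&#13;

```php
// Build CSV in memory, then read the whole thing back as a string
$fh = fopen('php://memory', 'wb+');
fputcsv($fh, array('id', 'name'));
fputcsv($fh, array(1, 'Aaron'));

rewind($fh);                       // seek back to the start of the "file"
$csv = stream_get_contents($fh);   // "id,name\n1,Aaron\n"
fclose($fh);

echo $csv;
```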
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, whether through a timeout or an internal (500) error, you will receive a notification email from Worldpay. This mail includes two attachments: the request data Worldpay sent to your callback URL (including the encoded post data) and the response from your server.</p>&#13;
<p>Assuming you can work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 52</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on different (namely case-sensitive) environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
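<p>The convention can be seen in miniature with a few lines of plain PHP. This is a sketch of the name-to-path mapping, not Magento's actual autoloader code, and the MyPackage_MyModule prefix is a hypothetical example:</p>

```php
<?php
// Magento uppercases each underscore-separated segment of the model alias
// to build the class name...
function aliasToClass($prefix, $alias)
{
    $camel = str_replace(' ', '_', ucwords(str_replace('_', ' ', $alias)));
    return $prefix . '_' . $camel;
}

// ...and the autoloader then maps each underscore to a directory separator.
function classToPath($class)
{
    return str_replace('_', '/', $class) . '.php';
}

$class = aliasToClass('MyPackage_MyModule_Model', 'a_long_name_for_a_model');
echo $class . "\n";        // MyPackage_MyModule_Model_A_Long_Name_For_A_Model
echo classToPath($class);  // MyPackage/MyModule/Model/A/Long/Name/For/A/Model.php
```

<p>Note there is no alias that maps back to ALongNameForAModel.php, which is why the camelcased class can never be resolved on a case-sensitive file system.</p>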
<p>On a case-insensitive file system (e.g. Windows, or the default HFS+ on a Mac) this appears to work, but on a case-sensitive file system (e.g. Linux) it will not.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making are just to update simple attributes, for example we have a sales ranking attribute, you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is therefore more than 30 days old.</p>&#13;
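<p>If you want to sanity-check that interval from application code, the same arithmetic is easy to reproduce in PHP:</p>

```php
<?php
// Reproduce DATE_SUB('2010-05-20', INTERVAL 30 DAY) in PHP, pinned to the
// example date above so the result is predictable.
$cutoff = new DateTime('2010-05-20');
$cutoff->sub(new DateInterval('P30D'));
echo $cutoff->format('Y-m-d'); // 2010-04-20
```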
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration and at worst cause the upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, put it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general-purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>The type="core/template" attribute refers to the Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout.xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcached in version 1.4, where the second parameter of the delete command was removed. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), you just need to edit /etc/profile or ~/.profile or ~/.bash_profile, and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
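<p>As a sketch of the profile change (the locale names here are examples; pick a UTF-8 variant that appears in your 'locale -a' output):</p>

```shell
# ~/.profile or ~/.bash_profile
# Tell every program in the session to use a UTF-8 aware locale
export LC_ALL='de_DE.UTF-8'
export LANG='de_DE.UTF-8'
```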
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, change. So a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright as utf-8 and you will just see a question mark in the browser, as that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. By default, both these functions expect iso-8859-1 input. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p>htmlentities($text, null, &#39;utf-8&#39;);</p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
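<p>The same normalisation is available from the shell via the iconv command line tool, which is handy for fixing up whole files (the filenames here are invented for the example):</p>

```shell
# Write a file containing "café" in ISO-8859-1 (0xE9, octal 351, is é in Latin-1)
printf 'caf\351\n' > latin1.txt

# Normalise it to UTF-8 -- the same conversion as the PHP iconv() call above
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt

cat utf8.txt    # café
rm -f latin1.txt utf8.txt
```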
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't work, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
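<p>To spell out what's going on (a sketch; the filenames are made up): BSD sed on Mac OSX treats the argument after -i as a backup suffix, which may be empty, while GNU sed takes an optional suffix glued directly onto -i. Giving a real suffix works with both:</p>

```shell
printf 'hello world\n' > greet.txt

# BSD/OSX form:   sed -i ''  's/hello/goodbye/g' greet.txt
# GNU/linux form: sed -i     's/hello/goodbye/g' greet.txt
# Portable form: name a backup suffix, then throw the backup away
sed -i.bak 's/hello/goodbye/g' greet.txt && rm -f greet.txt.bak

cat greet.txt    # goodbye world
rm -f greet.txt
```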
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) expression expands to the list of array indices: ${#FILES[@]} gives the number of elements in FILES, and the seq command produces a sequence of numbers from x to y. If you call seq 0 4, you will get the numbers 0, 1, 2, 3, 4, one per line.</p>&#13;
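<p>Worth noting that the index arithmetic can be skipped entirely by quoting the whole array expansion, which also survives paths containing spaces:</p>

```shell
FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on )

# "${FILES[@]}" expands to exactly one word per array element
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```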
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often, you will want to change some details, baseurls, use test payment or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data  in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately, this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straight forward enough on linux right, you go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig, under the [alias] section, add this:</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
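<p>To see the moving parts without touching a real remote, here is a rough, self-contained sketch that recreates the situation in a throwaway repository, using a local bare repo as &#39;origin&#39; (all paths and branch names here are illustrative):</p>

```shell
#!/bin/sh
# Sketch: exercise the logic behind the `sup` alias in a throwaway repo,
# with a local bare repository standing in for a remote `origin`.
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "initial"
git push -q origin HEAD                 # seed the default branch on origin
git checkout -q -b zendesk
git push -q origin zendesk              # forgot -u: no tracking info is set
git rev-parse --abbrev-ref @{u} 2>/dev/null || echo "no upstream yet"
# This is exactly what `git sup` expands to:
git branch --set-upstream-to="origin/$(git symbolic-ref --short HEAD)"
git rev-parse --abbrev-ref @{u}         # now reports origin/zendesk
```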
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each command in a bash pipeline executes in a separate subshell… this means variable assignments made inside a pipeline do not survive it, as each stage runs in its own environment.</p>
<p>For some workarounds, check out <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
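<p>A minimal sketch of the behaviour, along with one of the workarounds from that FAQ (redirecting input into the loop so no pipeline is involved):</p>

```shell
#!/bin/sh
# Sketch: each command in a pipeline runs in its own subshell, so
# assignments made inside the pipeline don't survive it. Redirecting
# input into the loop avoids the pipeline entirely.
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))    # increments a copy in the subshell
done
echo "after pipeline: count=$count"      # count=0

count=0
while read -r line; do      # runs in the current shell: no pipeline
    count=$((count + 1))
done <<EOF
a
b
c
EOF
echo "after redirection: count=$count"   # count=3
```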
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits: they add noise to git history logs without really helping to convey what exactly changed.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with <em>git rebase</em>. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote>
<p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a &quot;fast-forward.&quot;</p>
</blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase, and to avoid accidentally creating a merge commit, you can set git merge to only perform fast-forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible, you can force the merge through with</p>
<pre><code>$ git merge --no-ff
</code></pre>
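<p>For the curious, here is a rough, self-contained sketch of the refusal in action, using a throwaway repository with two deliberately diverged branches (branch names are illustrative):</p>

```shell
#!/bin/sh
# Sketch: with merge.ff=only, git refuses any merge that would need a
# merge commit, nudging you to rebase first. Local throwaway repo only.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "You"
git config merge.ff only
git commit -q --allow-empty -m base
git checkout -q -b feature
git commit -q --allow-empty -m feature-work
git checkout -q -                               # back to the original branch
git commit -q --allow-empty -m diverging-work   # branches have now diverged
if git merge feature 2>/dev/null; then
    echo "merged (unexpected)"
else
    echo "fast-forward not possible: rebase, or use git merge --no-ff"
fi
```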
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now, and I got through it in a single sitting yesterday, so I thought I&#39;d chuck up a quick review.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I found a little surprising given the title. When I think cookbook, I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, though, it excels. It is detailed without being turgid, and it covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion, and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer, is tremendously hard, and I like the idea that those who attempt it, and do a good job, are appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple, and you would think obvious, example is always providing the final argument to assert statements with a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: writing strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a>, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would also have liked to see more of an introduction to TDD itself, the motivations for it, and perhaps a brief comparison between the two principal xUnit TDD styles: statist TDD and mockist/London-school TDD. The former is a test style mainly interested in setting up some state, running a behaviour, and checking the end state matches what you expected. The mockist approach is less interested in observing state and more interested in the messages passed between objects (the method calls between collaborators).</p>
<p>Overall I enjoyed the book; it fills a much-needed role in guiding budding PHP TDD practitioners in the use of the most mature testing tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns NULL if tableref.column is equal to 0. Returning NULL excludes that value from the MIN aggregation (MIN ignores NULLs), which has the effect of only considering values greater than zero.</p>
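<p>If you want to convince yourself of the NULLIF/MIN behaviour without spinning up MySQL, the same expressions work in SQLite, which is handy for a quick check (the table, column names and data below are made up):</p>

```shell
#!/bin/sh
# Sketch: demonstrate MIN(NULLIF(col, 0)) using the sqlite3 CLI, which
# shares MySQL's NULLIF semantics. Table/column names are illustrative.
sqlite3 :memory: <<'EOF'
CREATE TABLE products (group_id INTEGER, price REAL);
INSERT INTO products VALUES (1, 0.00), (1, 9.99), (1, 4.50),
                            (2, 0.00), (2, 0.00);
-- NULLIF(price, 0) turns 0.00 into NULL, and MIN skips NULLs;
-- group 2 has only zero prices, so its min_price comes back NULL
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products GROUP BY group_id;
EOF
```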
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop, and I think many others feel the same way. It&#39;s no real surprise to me, then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one place OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case-insensitive (but thankfully case-preserving) HFS+ filesystem. By default, PHP and php appear as the same thing to HFS+.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ cp php-fpm.conf.default php-fpm.conf
$ cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. PHP Select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research, it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR is that it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable, so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active, we can check that it&#39;s working</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache or Nginx? It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate/key pair and storing it under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It single-handedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace; in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite finished running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t have git already installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -L -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP MySQL port is a little bit of a pain. By default, Macports doesn&#39;t set a default mysql socket path, which leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the socket path for the MySQL version you&#39;re using to PHP&#39;s mysql.ini file. I use mysql55, so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH, but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case, do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems, add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a URL that I knew would be the name of an image. I knew strstr well, but that operates by giving you the remainder of the haystack string starting from the first occurrence of some needle. I wanted this behaviour, but only from the <em>last</em> occurrence of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years, and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted, and there&#39;s little that can or will be done. But I do wonder if it would be worth creating an object library to encapsulate primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed whether it&#39;s even a good idea, but some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary who oversaw innovations such as personal computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more; in particular, a new appreciation for a number of scientists I previously knew very little about, scientists who are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk: you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise it.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers who shared his vision of interactive computing, and with those, whether in his lab, the other labs or management, who didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single-minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding, and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs.</p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it.</p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds, that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don&#39;t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool; instead take advantage of the local and community codepools&#39; higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
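Mechanically, the override amounts to mirroring the core path under the local codepool and editing the copy. A sketch, assuming you are in a Magento installation root (the paths come straight from the post):

```shell
# create the mirror of the core path under the local codepool
mkdir -p app/code/local/Mage/GoogleCheckout/Model/Api/Xml

# copy the core class if present, then apply the one-line fix to the local copy
if [ -f app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php ]; then
    cp app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php \
       app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php
fi
```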
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes share the name Mage_Core_Model_Foo, one in local and the other in core, the version in local is used.</p>
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
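The same graceful behaviour can be had by signalling the master process directly: nginx treats SIGQUIT as a graceful stop and SIGTERM as a fast one, which is exactly what -s quit and -s stop send. A sketch; the pid file path below is an assumption (it varies by distro and build), so check the pid directive in your nginx.conf:

```shell
# graceful shutdown: QUIT lets workers drain their open connections first
# (pid file path assumed - confirm against the 'pid' directive in nginx.conf)
PIDFILE=/var/run/nginx.pid
if [ -f "$PIDFILE" ]; then
    kill -QUIT "$(cat "$PIDFILE")"   # run as root, or via sudo
else
    echo "no pid file at $PIDFILE - is nginx running?"
fi
```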
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. Back in November, though, I did resolve to read more technical books, and to focus particularly on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to reason why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk formalising most of the vocabulary of OO software development, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking didn&#39;t survive the succession to C++ and Java.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I think that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it made me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry-picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example, I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunchtimes. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero, so at the end of a thirty-minute work sprint I would take five minutes to quickly flick through the list. It is distracting, and most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. I feel that blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread longer and, after a few 30-minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013: we (still) don&#39;t have flying cars or hoverboards, AND as developers we still use terminals to interact with our operating systems. So every so often I like to browse through commandlinefu.com and try to pick up any little tidbits which improve my command line efficiency.</p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>column columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
<p>These next few are specific to zsh. While I do love bash, since switching to zsh I haven&#39;t really looked back; when you work with a terminal every single day, it&#39;s things like this that you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number then switches you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
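Neither the numbered directory stack nor bare-name cd is on by default in a vanilla zsh. The options below are standard zsh; the &#39;d&#39; and numeric aliases are my guess at how such a setup is typically wired (oh-my-zsh ships something very similar), so treat this .zshrc fragment as a sketch:

```shell
# ~/.zshrc fragment: directory stack + bare-name cd
setopt AUTO_CD AUTO_PUSHD PUSHD_IGNORE_DUPS

alias d='dirs -v'            # list the stack with indices
for i in 1 2 3 4 5 6 7 8 9; do
    alias "$i"="cd -$i"      # '1' jumps to stack entry 1, and so on
done
```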
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
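If you debug this way often, the forward can be made permanent in ~/.ssh/config, so a plain ssh to the box sets it up every time (the host name here matches the example above; adjust to taste):

```
# ~/.ssh/config
Host myvm.local
    RemoteForward 9000 localhost:9000
```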
<p>It&#39;s a bit of a hack, but a time-saving one. The other alternative is Vim and its xdebug plugin, which isn&#39;t bad. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps, often coinciding with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst dubious quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers: free to pick and choose the code they want to use, and free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves. </p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a>, <a href="http://www.doctrine-project.org">Doctrine</a>, are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However, the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
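<p>Process substitution is not specific to xmllint: bash replaces each &lt;(command) with a file descriptor path (such as /dev/fd/63) that the outer command can read like a regular file. A minimal sketch of the mechanism, using diff instead of xmllint:</p>

```shell
# Compare the output of two commands without creating temporary files.
# Each <(...) runs in a subshell and appears to diff as a readable file.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "identical"
```

<p>The same trick works for any tool that insists on a file argument rather than reading standard input.</p>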
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of affected files in commit &lt;commit&gt;.</p>
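<p>Here is the whole flow end to end as a throwaway-repository sketch (the repository path and file names are invented for the demo):</p>

```shell
# Build a scratch repo, commit two files, then list only the file names
# touched by the commit -- no diff body, just paths.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo User"
echo one > a.txt
echo two > b.txt
git add a.txt b.txt
git commit -qm "add two files"
git show --name-only --pretty=format: HEAD   # lists just a.txt and b.txt
```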
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation you can recover the account by starting mysqld with the --skip-grant option. This is roughly analogous to starting a Unix system in single user mode. </p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: by skipping the grant tables, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are plenty of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much as symptomatic of the popularity and low barrier to entry of PHP. Basically, there are more examples of bad code out there than for pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was still relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways   63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin      d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer exist in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branch list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
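<p>If you want to reproduce the whole situation end to end, here is a sketch using a pair of throwaway repositories (all paths, file names and branch names are invented for the demo):</p>

```shell
# Simulate a branch being deleted on the remote "from another host",
# leaving a stale remote-tracking ref behind in the clone.
remote="$(mktemp -d)"
work="$(mktemp -d)/work"
git init -q --bare "$remote"
git clone -q "$remote" "$work" 2>/dev/null
cd "$work"
git config user.email demo@example.com
git config user.name "Demo User"
echo hi > f.txt
git add f.txt
git commit -qm "initial"
git push -q origin HEAD:master HEAD:stale-branch
git fetch -q origin                                  # clone now tracks both branches
git -C "$remote" branch -D stale-branch > /dev/null  # "another host" deletes it
git branch -r                                        # origin/stale-branch still listed
git remote prune origin
git branch -r                                        # stale-branch is gone
```

<p>As an aside, git fetch --prune (or -p) performs the fetch and the prune in a single step.</p>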
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
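<p>The override in the gist amounts to something like the following sketch. This is from memory: the exact event and observer names should be checked against the Mage_Log module&#39;s config.xml for your Magento version.</p>

```xml
<!-- local.xml (sketch): replace Mage_Log's observers with the string
     "disabled" so the visitor/customer log events are never handled -->
<config>
    <frontend>
        <events>
            <controller_action_predispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_predispatch>
            <controller_action_postdispatch>
                <observers><log><type>disabled</type></log></observers>
            </controller_action_postdispatch>
            <customer_login>
                <observers><log><type>disabled</type></log></observers>
            </customer_login>
            <customer_logout>
                <observers><log><type>disabled</type></log></observers>
            </customer_logout>
        </events>
    </frontend>
</config>
```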
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay, it was via the breadcrumbs, that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x, so this code may look a little different if you&#39;re running 1.7.</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy: replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and of where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger-happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 
655958024588 (-newer) and not newer than  627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
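<p>Putting the two halves together, here is a self-contained sketch run in a scratch directory (the file names and timestamps are invented for the demo):</p>

```shell
# Create boundary files with touch -t, plus one file inside the range,
# then list everything newer than the start but not newer than the end.
dir="$(mktemp -d)"
touch -t 202001010000 "$dir/start_date_file"
touch -t 202006150000 "$dir/in_range_file"
touch -t 202012312359 "$dir/end_date_file"
find "$dir" -type f -newer "$dir/start_date_file" ! -newer "$dir/end_date_file"
```

<p>Note that the end boundary file itself matches (it is not strictly newer than itself), so pick boundary names you can recognise in the output.</p>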
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here the -exec argument does the heavy lifting: {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and \; terminates the command sequence, much as ; does in regular bash (the backslash stops the shell from interpreting it first).</p>
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in: it automates the latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old MagSafe (1) power pack, which I did from my old 13&quot; MacBook Pro, and it has a higher or equal wattage rating, you can use it with your MacBook Air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt MacBook Pro MagSafe can power a 45 watt MacBook Air, but a 60 watt MagSafe can&#39;t power an 85 watt 15&quot; MacBook Pro.</p>
<p><em>The other thing</em> I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as Mini DisplayPort. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains the list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your Macports shells. Once that&#39;s done, chsh will let you change shells no problem.</p>
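<p>Concretely, the steps look something like the following. This is a sketch run against a scratch copy of /etc/shells so it can execute without sudo; on a real system you would append to /etc/shells itself. The /opt/local prefix assumes a default Macports install.</p>

```shell
# Work on a scratch copy; on the real system this is /etc/shells (edited via sudo)
shells=./shells.example
printf '/bin/sh\n/bin/bash\n/bin/zsh\n' > "$shells"

# Add the Macports bash so chsh will accept it as a valid login shell
echo '/opt/local/bin/bash' >> "$shells"

# chsh only permits shells listed in this file; confirm the entry is present
grep -x '/opt/local/bin/bash' "$shells"
# prints /opt/local/bin/bash
```

<p>On the real system the equivalent is <code>sudo sh -c &#39;echo /opt/local/bin/bash &gt;&gt; /etc/shells&#39;</code> followed by <code>chsh -s /opt/local/bin/bash</code>.</p>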
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self-explanatory: you pipe a query into mysql and ask it for raw, unadorned output. It returns each row of column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe that to awk and ask it to wrap the values in single quotes. Because of shell escaping rules for single quotes, the q variable is set to a single quote character. Paste then joins all the output lines together, separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in the Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
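<p>To see the formatting stages in isolation, you can substitute printf for the mysql step (the colour values below are just placeholder data standing in for query results):</p>

```shell
# Stand-in for `mysql --silent` output: one value per line
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# prints ['red','green','blue'];
```

<p>Note the group parentheses must be backslash-escaped in sed&#39;s default basic regular expression syntax.</p>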
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr, to query for a date range you use the syntax:</p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *].
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start To Finish] OR -date_field: [* TO *] 
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
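<p>As a concrete sketch, assuming a hypothetical published_date field and a window of the last 30 days, the query would look like:</p>

```
-(-published_date:[NOW-30DAYS TO NOW] AND published_date:[* TO *])
```

<p>Read it inside-out: the inner clause matches documents that have a published_date but fall outside the window; the outer negation flips that to &quot;inside the window, or no value at all&quot;.</p>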
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written, trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit.</p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine which products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. It does mean the validator has to be constructed anew on each loop, but that allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity you create a desktop entry under ~/.local/share/applications</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]                                                                  
Version=4.0.1                                                                   
Type=Application                                                                
Name=JetBrains PhpStorm                                                         
Exec=/opt/PhpStorm/bin/phpstorm.sh %f                                           
Icon=/opt/PhpStorm/bin/webide.png                                               
Comment=Develop with pleasure!                                                  
Categories=Development;IDE;                                                     
Terminal=false                                                                  
StartupNotify=true                                                              
StartupWMClass=jetbrains-phpstorm           
</code></pre>
<p>Hit your windows or super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 linux distributions can run (in theory) on a 32bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (and I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic gnu coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture specific, compiler and c library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot; by break, in the absolute best case, I mean merely becoming un-bootable.</p>
<p>The process to recover the system was pretty much the same as to install it, you had to boot from a livecd, configure your network card, hook up to the network, then chroot to the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so say /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can push &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware you need to set this up on the host. </p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
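<p>The whole sequence can be collected into a small script to keep handy on a rescue key. This is only a sketch: the device names, filesystem types and mountpoint are the ones used above and will need adjusting to your system, and it must be run as root from the live environment.</p>

```shell
# Write out the recovery steps; running them requires root and a live environment
cat > rescue-chroot.sh <<'EOF'
#!/bin/sh
set -e
TARGET=/mnt/ubuntu

mount -t ext4 /dev/sda5 "$TARGET"
mount -t ext2 /dev/sda1 "$TARGET/boot"   # only if /boot is a separate partition

# proc, dev and sys must be visible inside the chroot
mount -t proc none "$TARGET/proc"
mount -o bind /dev "$TARGET/dev"
mount -o bind /sys "$TARGET/sys"

# working DNS inside the chroot
cp /etc/resolv.conf "$TARGET/etc/resolv.conf"

chroot "$TARGET" /bin/bash
EOF

# Syntax-check the script without executing it
sh -n rescue-chroot.sh && echo "script parses"
# prints script parses
```
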
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all of their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and so the group was considered out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid looking results. Funnily though, the status column came first and the product id column second (unlike the if branch, where they are in the reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key value), the result set collapsed to just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to be first in the result set so that it gets used as the key.</p>
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests less as an exercise in producing a regression catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, that you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read, and its chapters are of a length that can be easily digested on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of the Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP,  the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile is sourced only for login shells: when you enter your username and password at the console, or when you log in over SSH. The .bashrc file is sourced when starting an interactive non-login shell, that is, whenever you open up a terminal.</p>
<p>There is some confusion around login shells started after you are already logged in, such as when you use the su - command or an explicit login shell provided by some desktop environments. The same rule applies in these cases: a login shell sources .bash_profile, and .bashrc only runs if .bash_profile sources it.</p>
<p>I tend to put environment setup in .bash_profile: paths and other one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to simply source .bashrc from your .bash_profile and then put everything in .bashrc.</p>
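<p>A minimal sketch of that arrangement (assuming both files live in your home directory, and with a hypothetical PATH tweak standing in for your own setup):</p>

```shell
# ~/.bash_profile: read by login shells only.
export PATH="$HOME/bin:$PATH"   # one-time environment setup (example)

# Delegate everything else to .bashrc so login and non-login
# interactive shells end up configured identically.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```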
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation; it seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm can only be used when the VM is powered off; use controlvm to change port forwarding on a running VM.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libXss, not being there. But it <em>is</em> there; specifically though, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains it: we need compatible x86 shared libraries. When you hit these sorts of issues, it&#39;s always best to see what else is missing. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>Looking at this, I am missing compatible 32bit versions of both libXss and the various Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code> $ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes with the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle differences in how programs are assembled for x86_64 versus x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
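<p>As a sketch (test.c here is a hypothetical one-function file, chosen with no #includes so it compiles even before the i386 dev packages are installed), you can generate and diff the two listings directly:</p>

```shell
# Hypothetical minimal source file; no headers needed.
cat > test.c <<'EOF'
int add(int a, int b) { return a + b; }
EOF

gcc -m32 -S -masm=intel -o test32.s test.c   # 32-bit assembly
gcc -m64 -S -masm=intel -o test64.s test.c   # 64-bit assembly

# The calling conventions alone make the listings diverge:
# i386 passes arguments on the stack, x86_64 in registers.
diff test32.s test64.s
```

<p>Note diff exits non-zero when the files differ, so guard for that if you wrap this in a script.</p>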
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying; things have been put into and then ripped out of the core quite frequently over the last few releases. A common problem I&#39;m seeing lately when upgrading my test-suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t already doing it this way. The changed code works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify, lets you get information about a file, size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 down to 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
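<p>As an aside, the sed call isn&#39;t strictly needed; bash parameter expansion can build the new filename. A sketch of the same loop (nullglob added so an empty directory is a no-op):</p>

```shell
# Strip the .jpg suffix with ${IMAGE%.jpg} and append -resized.jpg,
# avoiding the extra sed process per file.
shopt -s nullglob
for IMAGE in *.jpg; do
    convert -resize '1280x720' "$IMAGE" "${IMAGE%.jpg}-resized.jpg"
done
```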
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 exactly, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? Convert would resize it to the largest size that fits inside the box while keeping its 16:10 ratio, i.e. 1152x720. If we <em>really</em> want it to ignore common sense and squish things down to exactly our requested 1280x720, we need to use the bang operator, e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action:</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to &#39;push the contents of local branch mybranch into the branch adiffnamefortheremotebranch on the remote origin&#39;. Now, can you see where this is going with respect to our delete? Pushing with just a leading : (and no local branch name) is basically saying &#39;push nothing into the remote branch someremotebranch&#39;. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
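<p>If you want to convince yourself without risking a real repository, here is a self-contained sketch using throwaway repos in a temp directory (names like remote.git and develop are arbitrary):</p>

```shell
set -e
cd "$(mktemp -d)"
git init -q --bare remote.git            # a throwaway 'remote'
git clone -q remote.git work
cd work
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD                  # publish the default branch
git push -q origin HEAD:develop          # src:dst refspec creates develop
git ls-remote --heads origin             # two remote branches listed
git push -q origin :develop              # empty src deletes the remote branch
git ls-remote --heads origin             # develop is gone
```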
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. In itself this is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. But what is different is how sponge waits until end-of-file (EOF) before opening and writing to the output file. I.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 (cp1252) to UTF-8 and dump the result to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of Bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the output of which is piped into sponge. Sponge soaks up the standard input until EOF, then writes it back to the original file.</p>
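<p>If Moreutils isn&#39;t available, the same in-place effect can be had with a temporary file per conversion. A minimal, self-contained sketch (the sample file and its contents are invented; only *.c is converted here for brevity):</p>

```shell
set -e
cd "$(mktemp -d)"

# Fabricate a cp1252-encoded C source file: byte 0xA3 is the pound sign.
printf 'const char *price = "\243 5";\n' > sample.c

# Convert each file to a temp copy, then replace the original with it.
for FILE in *.c; do
    iconv -f cp1252 -t utf-8 "$FILE" > "$FILE.tmp" && mv "$FILE.tmp" "$FILE"
done

# The pound sign is now the two-byte UTF-8 sequence C2 A3.
od -An -tx1 sample.c
```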
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-hosted tarball in one line.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39;, which exposes the command&#39;s output as a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a> (effectively a temporary file descriptor), and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
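<p>A network-free way to get a feel for process substitution is to hand diff two pipelines; each &lt;(...) shows up as a file-like path (such as /dev/fd/63) holding the command&#39;s output. The files here are invented for the demonstration:</p>

```shell
#!/usr/bin/env bash
# Process substitution is a Bash feature; plain sh won't accept <( ).
set -e
cd "$(mktemp -d)"

# Two files with the same lines in different orders.
printf 'b\na\nc\n' > one.txt
printf 'c\nb\na\n' > two.txt

# diff never sees a pipe; each <(...) is passed as a readable pathname.
diff <(sort one.txt) <(sort two.txt) && echo "identical"
```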
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (those that still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to the disable module Drush command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple, then. Putting it all together, we can capture the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
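<p>The pattern generalises well beyond Drush: the backticked command runs first and its output becomes the outer command&#39;s argument list. A tiny Drush-free illustration of the same shape (the file names are invented):</p>

```shell
set -e
cd "$(mktemp -d)"
touch ad.module ad_channel.module click_filter.module

# The inner ls runs first; its output becomes rm's arguments,
# just as pm-list's --pipe output becomes pm-disable's arguments.
rm $(ls *.module)

ls    # nothing left
```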
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
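<p>From Git 1.7.0 the push and the upstream configuration can be collapsed into one step with git push -u origin my-new-feature. A self-contained demonstration against a throwaway local remote (all names here are invented):</p>

```shell
set -e
cd "$(mktemp -d)"

# A bare repository stands in for origin.
git init -q --bare remote.git
git clone -q remote.git work
cd work
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"
git push -q origin HEAD

# Branch, commit, and push with -u to set the upstream in one go.
git checkout -q -b my-new-feature
git commit -q --allow-empty -m "Initial feature commit"
git push -q -u origin my-new-feature

git rev-parse --abbrev-ref my-new-feature@{upstream}   # origin/my-new-feature
```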
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well), you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important: it ensures the groups given to -G are <em>appended</em> to the user&#39;s existing supplementary groups. Without it, the existing groups are replaced by the ones supplied.</p>
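<p>To confirm the change took effect, list the user&#39;s groups with id (already-open sessions won&#39;t pick up new groups until the user logs in again). Since modifying real accounts needs root, this sketch just inspects the current user:</p>

```shell
# -n prints names rather than numeric ids; -G lists all groups,
# primary and supplementary, for the named (or current) user.
id -nG "$(id -un)"
```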
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies, which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the PHP XSL extension and Graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35-minute phpdoc run took just 43 seconds with DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
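<p>A couple of asides, assuming a reasonably recent Git: a bare git stash is equivalent to git stash save, and git stash pop applies the stash and drops it from the stash list in one step. The whole flow, in a throwaway repository with an invented file:</p>

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# One commit on the default branch, plus a develop branch at the same point.
echo "v1" > notes.txt
git add notes.txt
git commit -q -m "initial commit"
git branch develop

# Oops: edits made on the wrong branch.
echo "v2" > notes.txt

git stash              # shelve the dirty working tree
git checkout -q develop
git stash pop          # reapply the edits and drop the stash entry
git commit -q -am "Apply stashed changes"
```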
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
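<p>Newer versions of Git also accept the shorthand git checkout --track origin/develop, which behaves like the first form and derives the local branch name from the remote one. Seen end to end against a throwaway local remote (all names invented):</p>

```shell
set -e
cd "$(mktemp -d)"

# Build a remote that already has a develop branch.
git init -q --bare remote.git
git clone -q remote.git seed
cd seed
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"
git push -q origin HEAD HEAD:develop
cd ..

# A fresh clone checks out the remote's default branch...
git clone -q remote.git work
cd work
# ...and --track names the local branch after the remote branch.
git checkout -q --track origin/develop

git rev-parse --abbrev-ref HEAD   # develop
```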
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins user&#39;s home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
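<p>Equivalently, the file can be written by Git itself; run as the jenkins user that would be git config --global user.name Jenkins and so on. The sandboxed sketch below fakes the home directory so you can see exactly what --global writes:</p>

```shell
set -e
# Point HOME at a scratch directory so --global writes there rather
# than into a real ~/.gitconfig (for Jenkins: /var/lib/jenkins/.gitconfig).
export HOME="$(mktemp -d)"

git config --global user.name "Jenkins"
git config --global user.email "jenkins@localhost"

cat "$HOME/.gitconfig"
```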
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project set up in Jenkins using the standard suite of tools and templates as demonstrated on jenkins-php.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
              backupStaticAttributes=&quot;false&quot;
              strict=&quot;true&quot;
              verbose=&quot;true&quot;
              colors=&quot;true&quot;
              bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our Ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add the new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly, but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down it; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally, we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run one build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>Possibly, you may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t just about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an http request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li><em>reload</em> - Reload server configuration</li>
<li><em>restart</em> - Restart the server</li>
<li><em>exit</em> - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-built package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service:</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have binned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
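<p>The embedded gist contains the query I used; as a rough sketch, a report along these lines (the schema name &#39;mydatabase&#39; is illustrative) interrogates INFORMATION_SCHEMA to rank tables by their combined data and index size:</p>

```shell
# Keep the report in a shell variable so it can be fed to the mysql client,
# e.g.: mysql -uuser -p -e "$QUERY"
QUERY="SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = 'mydatabase'
ORDER BY (data_length + index_length) DESC
LIMIT 10"
```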
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though: an hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this:</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful, but it helped Google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the graphical package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it:</p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your repository:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent invocations of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote, add the --track option and give it the name of the master branch</p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally, I find option 4 the best, as it involves the least amount of work.</p>
<p>To avoid having to do any of this in future repositories, you can also set:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400 MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>.</p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of the tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
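<p>The gist has the generic version; the idea can be sketched roughly like this (the function name is illustrative, and the -p flag will prompt for a password on each call unless credentials live in ~/.my.cnf):</p>

```shell
# Dump every table in a database whose name matches a LIKE pattern,
# e.g.: mysqldump_bypattern dbuser mydb 'mytables_%'
mysqldump_bypattern() {
  local user="$1" db="$2" pattern="$3" t
  # ask mysql for the matching table names (-N suppresses the header row),
  # then dump each one in turn
  for t in $(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '$pattern'" "$db"); do
    mysqldump -u"$user" -p "$db" "$t"
  done
}
```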
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again using instead the Strings from the keys array we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String#to_sym as a convenience method to create a Symbol from a String&#39;s value.</p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
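<p>If you control the hash, another option (a small sketch, assuming Ruby 2.1+ for Array#to_h) is to normalise every key to a Symbol up front, so lookups no longer depend on the key&#39;s original type:</p>

```ruby
# hash with mixed String and Symbol keys, as in the example above
myarray = { :mykey => 'hello world', 'another_key' => 'goodbye world' }

# rebuild it with every key converted to a Symbol
normalized = myarray.map { |k, v| [k.to_sym, v] }.to_h

puts normalized[:mykey]        # => hello world
puts normalized[:another_key]  # => goodbye world
```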
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for 5 or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development with Rails (4th ed.) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP, for example (as in C), is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using full string objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones being one to get its string value and another to get its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a symbol as a hash key than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
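<p>The same-object behaviour is easy to verify for yourself; a quick sketch in plain Ruby:</p>

```ruby
# two equal Strings are distinct objects...
a = "key"
b = "key"
puts a == b         # => true  (same value)
puts a.equal?(b)    # => false (different objects)

# ...but every occurrence of a Symbol is the very same object
puts :key.equal?(:key)     # => true

# and String#to_sym bridges the two worlds
puts "key".to_sym == :key  # => true
```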
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t really feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that, the two best resources I&#39;ve found are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII, or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it&#8217;s also considerably slower than gzip.</p>
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided: the dump can only proceed as fast as the compressor, so (by default in MyISAM) tables stay locked for longer, denying other clients access to them. InnoDB implements row-level locking, which is slightly less offensive, but it should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may not be worth the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump; other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other clients. Writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean a MyISAM table can be written to mid-dump, costing some consistency, but that risk has to be weighed against blocking access to the table for the length of the backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
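<p>Putting the pieces together, a dump command combining these options might look like the sketch below. The user and database names are placeholders, and the actual mysqldump invocation is commented out since it needs a live server.</p>

```shell
# The mysqldump options discussed above, collected in one place.
DUMP_OPTS="--single-transaction --skip-lock-tables --disable-keys --no-autocommit"

# Against a real server you would run something like:
#   mysqldump $DUMP_OPTS -uuser -p mydatabase > mydump.sql
CMD="mysqldump $DUMP_OPTS -uuser -p mydatabase"
echo "$CMD"
```

<p>Remember to compress the resulting file as a separate step rather than in the pipeline, for the locking reasons above.</p>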
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try to use the VIM substitute regexp <code>s/"The quick brown.*"//</code>, you'll end up nuking the whole line up to the terminating period. This is because the regular expression is acting greedily, matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory, or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’ a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>svn del</code>.</p>&#13;
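<p>A quick way to convince yourself that --cached leaves the working copy alone; the file name below is just a throwaway example, run in a temporary repository.</p>

```shell
cd "$(mktemp -d)"
git init -q .
echo 'db password here' > config.php
git add config.php
git rm --cached config.php          # untrack, but keep the file
test -f config.php && echo "config.php still on disk"
git status --porcelain config.php   # now listed as ?? (untracked)
```
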
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag automatically stages and commits all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is the conventional naming for a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs, I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial resultset was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser and go to 127.0.0.1:8808 and read all the api docs for your installed gems (i.e Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> instructions up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part it worked fine for me, however I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'bad' (which means 'bath' in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0]; ?&gt;&#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
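<p>You can check the shell behaviour in a couple of lines; the script path below is just a scratch location for the sake of the demo.</p>

```shell
cat > /tmp/argv_demo.sh <<'EOF'
#!/bin/sh
echo "name: $0"       # the program's name, like C's argv[0]
echo "first arg: $1"  # the first real argument
EOF
sh /tmp/argv_demo.sh helloworld
# -> name: /tmp/argv_demo.sh
# -> first arg: helloworld
```
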
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is, in PHP, C, BASH the first element of ARGV is the program's name. In Ruby, and in PERL, it is the first argument passed into the program. </p>&#13;
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin or feature you need to go to the Install New Software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened with me</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql gem gets a little confused: when you install it as directed by Rake, it links against the MySQL bundled with OSX and not the Macports one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary)</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own macports mysql_config path.  Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a>, you'll often want to update documents over the index's lifecycle. This can prove tricky with the current implementation: there is no in-situ update feature, so you must first delete the old document and then add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're querying the full index just to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This is down to the field being added tokenized. This is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('http://a.com/uri', 'uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out a little bit until I dug around the source a little bit looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API.  Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for the blog post, is the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled into a sort of periodic-table format, suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I'm at home, flicking through to the back pages of the GOF book, or in a pinch hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf</a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First, back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
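<p>One caution on the sed step: a blanket replace will also touch any post content that happens to contain the string 'latin1'. A safer sketch (the sample dump line below is made up) targets only the charset declarations mysqldump writes:</p>&#13;

```shell
# Caveat: s/latin1/utf8/g would also rewrite user text containing "latin1".
# Restricting the substitution to mysqldump's charset declarations is safer.
# The sample line here is made up for illustration.
printf '%s\n' ') ENGINE=MyISAM DEFAULT CHARSET=latin1;' |
  sed 's/CHARSET=latin1/CHARSET=utf8/g'
```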
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
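<p>To sanity-check the conversion before importing anything, here is a tiny self-contained example (the file names are made up):</p>&#13;

```shell
# Sanity-check iconv on a tiny sample. 0xE9 (octal 351) is 'e-acute' in
# ISO-8859-1; after conversion it becomes a two-byte UTF-8 sequence.
printf 'caf\351\n' > latin1-sample.txt
iconv -f latin1 -t utf-8 latin1-sample.txt > utf8-sample.txt
cat utf8-sample.txt   # prints: café
```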
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to install, convert that to UTF-8 too:</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and in front of the actual client, and b) to impress the client sufficiently that they want to interview you.  So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and set in a minimum of fonts. Giant mastheads, fancy bullets and a mess of fonts aren't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies,  group like concepts, simplify and support the concepts. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s when I was going through my first programming book 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized. Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sys-admin monkeys that haven’t been seduced by Python. And, of course, a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine to produce HTML with a core C backend doing the grunt work. This seems a ridiculous idea now, as CPU cycles are much cheaper than man hours. But in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s. Being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment with which to be familiar.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers and have a recognisable C fragment.</p>&#13;
<p>The biggest benefit, though, of learning a bit of C or C++ is to get an “appreciation” of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also a sometimes useful characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL <code>'\0'</code>.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters, and str itself points to an address in memory where 20 bytes have been reserved for it. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as to warn you that you’ve gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case it means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally if you have a 20 char string defined but want to store a 25 char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in. These were originally where your business logic was meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way towards explaining this rather fundamental aspect of computer science.</p>&#13;
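<p>You can see your own machine's byte order straight from the shell; this is just an illustrative sketch using the standard printf and od utilities:</p>&#13;

```shell
# Write the 32-bit value 1 as four raw bytes in little-endian order
# (01 00 00 00), then let od read them back as one unsigned 4-byte integer
# in the host's native byte order. A little-endian host prints 1; a
# big-endian host would interpret the same bytes as 16777216.
printf '\001\000\000\000' | od -An -tu4 | tr -d ' '
```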
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv, fputcsv for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date granularity you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
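<p>Rather than pasting the whole body inline, you can also save the POST data from the email attachment to a file and point curl at it with -d @file. A minimal sketch (the file name, field values and URL here are all hypothetical):</p>&#13;

```shell
# Hypothetical: postdata.txt holds the url-encoded body lifted from the
# Worldpay email attachment (values here are made up).
printf 'transId=1000000000&transStatus=Y&cartId=12345678' > postdata.txt
# Cheap check that you grabbed the right order before replaying it:
sed 's/.*transId=\([0-9]*\).*/\1/' postdata.txt   # prints: 1000000000
# Replay against your callback (placeholder URL):
# curl -d @postdata.txt 'https://mysite.com/callback'
```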
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Using svn:externals to include third-party libraries is a convenient way of keeping them out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply do a</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[development tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code>.</p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
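For illustration, this is the shape of the two kinds of rewrite declaration in a module's config.xml; the context element and class names here are made up:

```xml
<config>
    <global>
        <helpers>
            <!-- helper rewrite: the context element is case sensitive -->
            <catalog>
                <rewrite>
                    <data>My_Module_Helper_Data</data>
                </rewrite>
            </catalog>
        </helpers>
        <blocks>
            <!-- block rewrite: the context element is not case sensitive -->
            <catalog>
                <rewrite>
                    <product_view>My_Module_Block_Product_View</product_view>
                </rewrite>
            </catalog>
        </blocks>
    </global>
</config>
```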
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on case-sensitive environments.</p>&#13;
<p>Your Mage::getModel('mymodule/a_long_name_for_a_model') call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as Mage::getModel('mymodule/alongnameforamodel') will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
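To see why, it helps to mirror what Magento 1.x does with a model alias: each underscore-separated segment has its first letter upper-cased to build the class name, and the autoloader then turns underscores into directory separators. A sketch of that mapping in Python (illustrative only, not Magento's actual code):

```python
# Illustrative mirror of Magento 1.x model-name handling (not Magento's code):
# ucfirst each underscore-separated segment of the alias, then the autoloader
# maps underscores to directory separators.
def alias_to_class(prefix, alias):
    return prefix + "_" + "_".join(p[:1].upper() + p[1:] for p in alias.split("_"))

def class_to_path(class_name):
    return class_name.replace("_", "/") + ".php"

cls = alias_to_class("MyPackage_MyModule_Model", "a_long_name_for_a_model")
print(cls)                 # MyPackage_MyModule_Model_A_Long_Name_For_A_Model
print(class_to_path(cls))  # MyPackage/MyModule/Model/A/Long/Name/For/A/Model.php

# A camelcased file like ALongNameForAModel.php is never produced:
print(alias_to_class("MyPackage_MyModule_Model", "alongnameforamodel"))
# MyPackage_MyModule_Model_Alongnameforamodel -> .../Alongnameforamodel.php
```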
<p>On Windows (case-insensitive) this is fine; on a case-sensitive filesystem, e.g. case-sensitive HFS+ (Mac) or a typical Unix filesystem, it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you would have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because Magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead, connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up either in the db (the eav_attribute table) or in the admin backend under Catalog-&gt;Attributes.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes about a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date column is more than 30 days old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, it is more than 30 days old and will be matched by the query.</p>&#13;
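The same interval arithmetic is easy to sanity-check outside MySQL; here is a quick sketch with Python's datetime module (illustrative only):

```python
# Sanity-check of the DATE_SUB(CURDATE(), INTERVAL 30 DAY) cutoff in Python.
from datetime import date, timedelta

today = date(2010, 5, 20)
cutoff = today - timedelta(days=30)
print(cutoff)  # 2010-04-20

# A row dated before the cutoff is more than 30 days old and would match.
row_date = date(2010, 4, 1)
print(cutoff > row_date)  # True
```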
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some HTML, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to a Magento class, Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine-grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. By default, however, your terminal environment does not. I have my environment configured to use the de_DE (German) locale, yet some characters, most typically ü and ä, do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal, you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by calling 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line export LC_ALL='de_DE.UTF-8' or export LC_ALL='en_GB.UTF-8'.</p>&#13;
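<p>As a concrete sketch, this is the line to drop into one of those files (assuming the de_DE.UTF-8 locale is actually installed on your box; check with 'locale -a' first):</p>

```shell
# in /etc/profile, ~/.profile or ~/.bash_profile
# (assumes de_DE.UTF-8 appears in the output of `locale -a`)
export LC_ALL='de_DE.UTF-8'
```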
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example, I just fixed a simple issue for a client who was pasting content from Word into their CMS. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. Extended characters like accents, symbols and umlauts differ, however: a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, as that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<pre>$utf8Text = iconv(&#39;iso-8859-1&#39;, &#39;utf-8&#39;, $isoText);</pre>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<pre>htmlentities($text, ENT_QUOTES, &#39;utf-8&#39;);</pre>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
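<p>As a quick sanity check, the same normalisation can be run from the shell with the iconv CLI, the command-line cousin of the PHP extension; here \351 (0xE9) is "é" in iso-8859-1:</p>

```shell
# write a byte of latin-1 and convert the file to valid utf-8
printf 'caf\351\n' > latin1.txt
iconv -f iso-8859-1 -t utf-8 latin1.txt   # prints: café
```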
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't (at least with OSX's BSD sed), and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick is to pass -i an empty backup suffix, like this: sed -i '' 's/hello/goodbye/g' helloworld.txt </p>&#13;
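<p>If your script has to run under both BSD and GNU sed, a portable variant is to give -i a real backup suffix, which both accept:</p>

```shell
# works with BSD (OSX) and GNU sed alike; leaves helloworld.txt.bak as a safety net
printf 'hello world\n' > helloworld.txt
sed -i.bak 's/hello/goodbye/g' helloworld.txt
cat helloworld.txt   # prints: goodbye world
```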
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the base url directly from Magento; below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br /> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br /> http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The ${#FILES[@]} expression gives the number of elements in FILES, and seq produces a sequence of numbers from x to y: seq 0 4 prints 0, 1, 2, 3, 4, one per line. So $(seq 0 $((${#FILES[@]} - 1))) yields the indices 0 to n - 1 for the loop.</p>&#13;
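<p>If you don't actually need the indices, a simpler and more idiomatic form iterates the elements directly (quoting "$FILE" keeps paths with spaces intact):</p>

```shell
# iterate the array elements themselves rather than their indices
FILES=( a/path/to/a/file1 a/path/to/another/file2 )
for FILE in "${FILES[@]}"; do
  echo "$FILE"
done
```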
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: baseurls, test payment accounts, or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version, but it&#39;s unclear how to build one directly from the sources.</p>
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on linux, right? You go </p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and in our case, add a user to an additional group, handy if for example, you want to add your user to the wheel group to make use of password free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git, is if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
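<p>The alias leans on git symbolic-ref --short HEAD to resolve the current branch name; a quick look in a hypothetical throwaway repo shows what it returns:</p>

```shell
# symbolic-ref --short HEAD resolves to the current branch name,
# which the alias interpolates into --set-upstream-to=origin/<branch>
git init -q demo && cd demo
git checkout -q -b zendesk
git symbolic-ref --short HEAD   # prints: zendesk
```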
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each sequence in a bash pipeline executes in a separate subshell…this means variables cannot be passed along the pipeline, as each new subprocess invokes a brand new environment.</p>
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
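<p>A quick illustration of both the pitfall and one of those workarounds (redirecting into the loop, so it runs in the current shell rather than a pipeline subshell):</p>

```shell
# the counter is lost: the while loop runs in a pipeline subshell
count=0
printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
echo "$count"    # prints 0: the increments happened in a subshell

# workaround: feed the loop from a redirect instead of a pipe
count=0
while read -r line; do count=$((count+1)); done <<'EOF'
a
b
EOF
echo "$count"    # prints 2
```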
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>&#13;
<pre><code>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together; this is called a "fast-forward."
</code></pre>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
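<p>Here's roughly what that looks like in a hypothetical sandbox repo (throwaway author details) once the branches have diverged:</p>

```shell
# build two diverged branches, then watch merge.ff=only refuse the merge
git init -q repo && cd repo
git -c user.email=a@example.com -c user.name=a commit -q --allow-empty -m base
git checkout -q -b feature
git -c user.email=a@example.com -c user.name=a commit -q --allow-empty -m feature-work
git checkout -q -                       # back to the branch we started on
git -c user.email=a@example.com -c user.name=a commit -q --allow-empty -m other-work
git config merge.ff only
git merge feature || echo "not a fast-forward: rebase first"
```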
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up, and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion, and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always supplying the optional final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (unsurprisingly) easier to test than code that is constantly mutating global state.</p>
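<p>For instance (a hypothetical cart test), that final message argument costs nothing to write and pays for itself the first time the test breaks:</p>
<pre><code>// Without the message, a failure reads only &quot;Failed asserting that 2 matches expected 3&quot;
$this-&gt;assertEquals(3, $cart-&gt;getItemCount(),
    &#39;Cart should hold 3 items after the bundle is added&#39;);
</code></pre>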
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel there is sufficient scope for more content here, especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a>, for example. The chapter on data providers is great, but I find you often need more fine-grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple of concepts that, once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and a brief comparison between the two principal xUnit TDD styles: Statist TDD and Mockist/London School TDD. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest priced product from that group, however some products actually have a 0.00 price (for whatever reason). You don&#39;t want to actually show 0.00 as the lowest price for this group of products, you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which returns null when tableref.column equals 0. Because aggregate functions ignore nulls, those zero values are excluded from MIN&#39;s comparison, so only values greater than zero are considered.</p>
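<p>As a worked example, suppose a hypothetical products table where group 1 holds the prices 0.00, 4.99 and 7.50:</p>
<pre><code>-- MIN(price)            over group 1 returns 0.00
-- MIN(NULLIF(price, 0)) over group 1 returns 4.99
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products
GROUP BY group_id;
</code></pre>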
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But somewhere that OSX falls severely behind, is its use of a BSD inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum, and portage, heck even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With Iterm2 and Macports you can have a functional GNUlike Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember is that OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
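<p>You can see this for yourself (assuming your volume uses the default case-insensitive HFS):</p>
<pre><code>$ cd $(mktemp -d)
$ echo macports &gt; PHP
$ cat php    # same file: HFS ignores the case difference
macports
</code></pre>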
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side-by-side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
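<p>The resulting php.ini lines look something like this (substitute your own timezone):</p>
<pre><code>date.timezone = Europe/London
cgi.fix_pathinfo = 0
</code></pre>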
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/php54/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. php_select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process. And with some research it turns out it&#39;s because Macports PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active, we can check that it&#39;s working.</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier I turn the channel autodiscovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for magento development, feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed ssl certificate / key pair and storing them under /opt/local/etc/nginx/ssl</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace and in my normal development environment (VirtualBox over NFS), the universe would have undergone heat death long before the TAF suite completed its running.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what extension options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite open up a new terminal</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by just appending the sock file for the mysql version you&#39;re using into the PHP mysql.ini file. I use mysql55 so to fix PHP I do this</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee -a /opt/local/var/db/php54/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command  to help track it down.</p>
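<p>Equivalently, because sshfs drives ssh under the hood, a host entry in ~/.ssh/config (host and key names here are just examples) saves typing the option at all:</p>
<pre><code>Host awshost
    HostName aws.instance.com
    User aaron
    IdentityFile ~/ssh_keys/aaron@awshost.pem
</code></pre>
<p>Then it&#39;s simply: sshfs awshost:/var/www/ ~/Sites/awshost</p>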
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a url that I knew would be the name of an image. I knew strstr well but that operates by giving you the remainder of a string that occurs after some needle in a string haystack. I wanted this behaviour, but only from the <em>last</em> instance of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> — This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
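<p>As strrchr includes the needle itself, pairing it with substr strips the leading slash and leaves just the filename:</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo substr(strrchr($url, &#39;/&#39;), 1); // prints d.img
</code></pre>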
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of these two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate the primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But some object wrappers would definitely help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who was the Executive-X he mentioned in <em>The Early History of Smalltalk</em> (it was Jerry Elkind). However I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about. Scientists that are almost single handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we have Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk, you turn it on and you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson along with the almost unlimited cash of Xerox to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impressario</em> Robert Taylor, a man who as much as anything can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses but genius needs direction and at times support. This is the role Bob Taylor played. The story of PARC for better and worse revolves around him and his relationship with the researchers that shared his vision of interactive computing and those whether in his or the other labs, or in management, that well didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people that just happen to be in technology - rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with the proneness to credentialism by academically minded administrators. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox and PARC particularly, their feelings, motivations and backgrounds that brings this extraordinary tale of modern computing&#39;s  birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xerox parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial:</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems, it becomes easier to let more significant problems slide too. Hence the rule of thumb: &#39;Don&#39;t Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order: local, community, then core. This means that if two classes named Mage_Core_Model_Foo exist, one in local and the other in core, the version in local is used.</p>
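<p>To sketch what that override looks like on disk (shown here against a scratch directory rather than a live install, so it&#39;s safe to run as-is):</p>

```shell
# Copy the core class into the local codepool and apply the fix to the
# copy -- local beats core at classload time. In practice you would run
# the mkdir/cp from your real Magento root instead of a scratch tree.
root=$(mktemp -d)    # stand-in for the Magento root
rel=Mage/GoogleCheckout/Model/Api/Xml/Checkout.php

mkdir -p "$root/app/code/core/${rel%/*}"
echo '<?php // pretend core class' > "$root/app/code/core/$rel"

mkdir -p "$root/app/code/local/${rel%/*}"
cp "$root/app/code/core/$rel" "$root/app/code/local/$rel"
echo "override in place: app/code/local/$rel"
```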
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx binary directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
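<p>Under the hood the -s flags are thin wrappers around plain Unix signals sent to the master process: stop maps to SIGTERM and quit to SIGQUIT. Assuming the usual pid file location (check the pid directive in your nginx.conf if yours differs), the graceful shutdown can also be triggered with kill:</p>

```shell
# Equivalent to `nginx -s quit` when nginx is running; a no-op (with a
# note) when it is not. /var/run/nginx.pid is the common default on
# Ubuntu -- adjust PIDFILE if your build puts it elsewhere.
PIDFILE=${PIDFILE:-/var/run/nginx.pid}
if [ -f "$PIDFILE" ]; then
    kill -QUIT "$(cat "$PIDFILE")"
    echo "sent SIGQUIT to nginx master"
else
    echo "no pid file at $PIDFILE"
fi
```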
<p>Let your visitors finish their drink: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a>.</p>
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. I did, though, resolve in November to read more technical books, and particularly to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and reasons about why it achieved greater lasting success as a trailblazer for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a></p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most modern version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing - all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. They formalised most of the vocabulary of OO software development along the way, yet it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few orders of abstraction, bigger piles of data and slightly more exotic technologies. But when I consider that the fundamental concept of &#39;Agile&#39; or at least &#39;iterative&#39; development was doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can - whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero, so at the end of a thirty minute work sprint I would take five minutes to quickly flick through the list. Most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself; blogs have their place, but I have been spending far more time reading blogs than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013, we (still) don&#39;t have flying cars, or hoverboards, AND as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try and pick up any little tidbits which improve my command line efficiency. </p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to enter in a long command. In my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>Column columnates its input: the -t argument formats standard input into a table and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
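<p>A quick illustration with some invented input, so the effect is visible without touching /etc/passwd:</p>

```shell
# column -s: -t splits each line on ':' and pads the fields into
# aligned columns.
printf 'user:shell\nroot:/bin/bash\ndaemon:/usr/sbin/nologin\n' | column -s: -t
```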
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. When you work with a terminal every single day, it&#39;s things like this you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will switch you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
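<p>These behaviours hang off a handful of zsh options; here is a minimal ~/.zshrc sketch that enables them. Note the &#39;d&#39; shortcut and the numeric jumps are not builtins but aliases - defined below the way oh-my-zsh does it:</p>

```shell
# ~/.zshrc -- directory stack conveniences (see zshoptions(1))
setopt AUTO_CD            # a bare directory name cd's into it
setopt AUTO_PUSHD         # every cd pushes the old directory onto the stack
setopt PUSHD_IGNORE_DUPS  # keep the stack free of duplicates
alias d='dirs -v'         # list the stack with indices
for i in 1 2 3 4 5 6 7 8 9; do alias "$i"="cd -$i"; done
```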
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know how much time it has saved me exactly, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by piping the buffer to sudo tee (:w !cmd pipes the buffer to cmd, and % expands to the current file&#39;s path)</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I setup my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have macports in /opt/local (the default) and are using the mysql55 package).</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
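<p>If the project installs its gems through Bundler, the same flags can be persisted once with Bundler&#39;s per-gem build configuration rather than retyped on each install (same macports paths as above):</p>

```shell
# Stored in Bundler's config; applied automatically whenever
# `bundle install` builds the mysql2 native extension.
bundle config build.mysql2 --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql
```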
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This sets up myvm.local to forward all connections to its localhost on port 9000 to the remote client&#39;s port 9000. When xdebug goes to connect to localhost:9000, it ends up actually connecting to mydevmachine.local:9000.</p>
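<p>If you find yourself setting up this tunnel often, it can live in ~/.ssh/config instead of being retyped (host and port here are just the examples from above):</p>

```
# ~/.ssh/config -- recreates `ssh -R 9000:localhost:9000 myvm.local`
Host myvm.local
    RemoteForward 9000 localhost:9000
```

<p>With that in place, a plain ssh myvm.local brings the debugging tunnel up automatically.</p>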
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months, PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.com">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies.</p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become the most pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. PEAR&#39;s age, strictly speaking, is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
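<p>Composer sidesteps exactly this, because dependencies are declared and resolved per project rather than machine-wide. A minimal composer.json sketch (package names invented for illustration):</p>

```json
{
    "require": {
        "acme/package-x": "1.0.*",
        "acme/package-y": "2.0.*@beta"
    }
}
```

<p>Two projects on the same machine can require conflicting versions of the same package without ever noticing each other.</p>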
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of at best variable and at worst, dubious, quality, and a community lacking in any sort of dynamism. If you make something easy, people will use it. PEAR is difficult to use for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers. Free to pick and choose the code they want to use. Free from having to worry about navigating the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit over the top of code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the basic issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while. However the advances PHP 5.3 brought to the table have significantly helped improve the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with and cannot use piped input. So we need to use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature to avoid having to create temporary files.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
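<p>Process substitution in a nutshell: the shell runs the command and exposes its output as a readable /dev/fd path, which is why a file-only tool will accept it. You can see the mechanism with wc standing in for xmllint:</p>

```shell
# <(...) expands to a path like /dev/fd/63 whose contents are the
# command's output, so tools that insist on a file name can read it.
wc -l <(printf 'one\ntwo\nthree\n')
```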
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you a list of the files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
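<p>If you want a safe sandbox to try these flags in, here is a throwaway-repo sketch (assuming git is installed; the file names and commit message are invented for the demo):</p>

```shell
set -e
repo=$(mktemp -d)              # scratch repo in a temp directory
cd "$repo"
git init -q
echo one > a.txt
echo two > b.txt
git add a.txt b.txt
# -c supplies an identity so the demo works without any global git config
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add files'
# print the commit subject followed by just the affected file names
git show --name-only --pretty=format:%s HEAD
```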
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQLs Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: with the grant tables skipped, any user can connect to the running mysqld service with full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There&#39;s a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself, so much symptomatic of the popularity and low barrier to entry of PHP. Basically there&#39;s more examples of bad code out there compared to pretty much any other platform because there&#39;s simply more code out there, written by programmers of wildly varying skill. The other problem, is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time where people still actually wrote web applications in C and where the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics when slating the language appear to be code archeologists, excavating pre-historic practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is Rails, at least in the terms he&#39;s trying to couch it, has already become the &#39;New PHP&#39;. I am a novice Rails developer, I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by, is the sheer amount of bad advice out there. Advice novices will come across, if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately what I found is if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time as more specific exceptions need to be handled I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn, I&#39;m going to go right here and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then since everything derives from <strong>Exception</strong>, nothing else will get a look in (Rails will look at the handlers from the bottom up). The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
<p>What can we learn from this? Well, one lesson is that Rails programmers live in a glass house and shouldn&#39;t throw stones. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up, making it much harder for novices to find good advice.</p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how PHP is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, they need to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway logiin and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer in the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh then I need to prune my branches list. The git incantation to do this is</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway logiin and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
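<p>Worth knowing too: a plain git fetch never deletes remote-tracking refs, but git fetch --prune (or -p) fetches and prunes in one step. Here is a self-contained sketch of the whole stale-branch lifecycle (repo and branch names invented for the demo; a second clone stands in for the other host that deleted the branch):</p>

```shell
set -e
base=$(mktemp -d)
git init -q --bare "$base/remote.git"              # the shared remote
git clone -q "$base/remote.git" "$base/work" 2>/dev/null
cd "$base/work"
br=$(git symbolic-ref --short HEAD)                # master or main, per config
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
git push -q origin "$br" "$br:feature"             # publish two branches
git clone -q "$base/remote.git" "$base/other"      # "another host"
git -C "$base/other" push -q origin :feature       # ...deletes feature there
git branch -r                                      # origin/feature: stale here
git fetch -qp origin                               # fetch + prune in one go
git branch -r                                      # origin/feature is gone
```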
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying; if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages, they definitely get routed through magento&#39;s escaping code. </p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now look at line 11: if breadcrumbs are enabled, unescaped input is happily added, ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy, replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe; they fixed one occurrence but (programmers being human) missed the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, but for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. The <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib to you: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>.</p>
<p>This is a big deal as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and the escaped semicolon (\;) terminates the command sequence (much like an unescaped ; does in regular bash).</p>
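<p>The whole recipe can be tried end to end in a scratch directory (all file names and timestamps below are invented for the demo):</p>

```shell
set -e
work=$(mktemp -d)
mkdir "$work/data"
# boundary files marking the start and end of the range
touch -t 202001010000 "$work/start_date_file"
touch -t 202012310000 "$work/end_date_file"
# sample files before, inside, and after the range
touch -t 201906010000 "$work/data/too_old"
touch -t 202006150000 "$work/data/in_range"
touch -t 202106010000 "$work/data/too_new"
# list matches: only in_range falls between the two boundary mtimes
find "$work/data" -type f -newer "$work/start_date_file" \
    ! -newer "$work/end_date_file" -ls
# delete the matches, leaving too_old and too_new untouched
find "$work/data" -type f -newer "$work/start_date_file" \
    ! -newer "$work/end_date_file" -exec rm -f {} \;
ls "$work/data"
```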
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to setup a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder.</p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equal wattage rating, you can use it with your macbook air.</p>
<p>It makes sense: a higher-rated power supply can support a lower-rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power an 85 watt 15&quot; macbook pro.</p>
<p><em>The other thing</em> which I have discovered, is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find etc it makes sense you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2, whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; | sed &#39;s/\(.*\)/[\1];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk or any other concatenation approach would do just fine here too.</p>
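<p>To see the pipeline work without a database to hand, you can stand printf in for the mysql step (the colour values are invented sample data):</p>

```shell
# mysql --silent emits one bare value per line; printf fakes that here.
# The explicit '-' tells paste to read standard input.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/\(.*\)/[\1];/'
# → ['red','green','blue'];
```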
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field: [* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that isn&#39;t in the date range and is not null. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
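A call-site count like the one above is easy to reproduce with grep. The snippet below runs the count against a tiny fixture directory standing in for app/code/core, since the exact total varies by Magento version:

```shell
# Count Mage::getSingleton() call sites under a source tree; a
# two-line fixture file stands in for a real Magento checkout.
dir=$(mktemp -d)
printf 'Mage::getSingleton("a/b");\nMage::getSingleton("c/d");\n' > "$dir/Example.php"
count=$(grep -ro 'Mage::getSingleton(' "$dir" | wc -l | tr -d ' ')
echo "$count"
rm -rf "$dir"
```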
<p>In the Magento/PHP land a more implementation specific problem with Singletons, is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic: it doesn&#39;t look at multi-product combinations, but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. This means it also has to be constructed again on each iteration, but it allows PHP to free the memory it was using.</p>
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste in the following (adjusted for your own paths):</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
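If you prefer to script the install, the entry can also be written non-interactively. The sketch below writes to a temporary directory purely for demonstration; the real destination is ~/.local/share/applications, and desktop-file-validate (from the desktop-file-utils package) can check the result:

```shell
# Write the launcher entry from a script instead of an editor; a temp
# dir stands in for ~/.local/share/applications here.
appdir=$(mktemp -d)
cat > "$appdir/jetbrains-phpstorm.desktop" <<'EOF'
[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
EOF
# Quick sanity check that the required keys are present.
grep -Ec '^(Type|Name|Exec)=' "$appdir/jetbrains-phpstorm.desktop"
```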
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt=""></p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up, and moved on to Ubuntu. It was remarkable in that it provided a BSD like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while it is a remarkable (and successful) processor, the 386 is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell, the basic GNU coreutils...and irssi. Enough utilities to let you build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s, and Gentoo systems tended to break a lot. By break, in the absolute best case, I mean the system merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: boot from a livecd, configure your network card, hook up to the network, then chroot into the broken install. At this point you could try to repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or usbkey), get the network card modules loaded and get a dhcp address. With the Precise Live DVD you can do all of this pretty easily by selecting the  &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing, we want to have functional network name resolution so we copy over the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf</p>
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
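Taken together, the steps above can be wrapped in one small recovery script. This is only a sketch: the device names and the /mnt/ubuntu mountpoint match the examples above and will need adjusting, and a real run needs root from the live environment, so DRY_RUN defaults to on and the commands are merely printed:

```shell
#!/bin/sh
# Consolidated sketch of the recovery steps above. With DRY_RUN set
# (the default here) the commands are printed, not executed; clear it
# and run as root from the livecd to actually mount and chroot.
DRY_RUN=${DRY_RUN-1}
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

TARGET=/mnt/ubuntu
run mount -t ext4 /dev/sda5 "$TARGET"
run mount -t ext2 /dev/sda1 "$TARGET/boot"  # only if /boot is separate
run mount -t proc none "$TARGET/proc"
run mount -o bind /dev "$TARGET/dev"
run mount -o bind /sys "$TARGET/sys"
run cp /etc/resolv.conf "$TARGET/etc/resolv.conf"
run chroot "$TARGET" /bin/bash
```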
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware, you need to set this up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method:</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of productIds and assigns a status id to each. It is typically used on grouped products when determining whether all their child stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Curiously though, the status column came first and the product id column second (unlike the if branch, where the product id came first). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set collapsed to just 2 rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
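The collapse is easy to demonstrate outside Zend. The toy simulation below (plain awk with made-up ids, nothing Magento-specific) keys rows on their first column the way fetchPairs does:

```shell
# Three products with statuses 1, 1 and 2. Keyed status-first, the
# rows collapse to one entry per distinct status...
rows='1 101
1 102
2 103'
printf '%s\n' "$rows" | awk '{pairs[$1] = $2} END {n = 0; for (k in pairs) n++; print n}'
# ...but keyed entity_id-first, every product keeps its own entry.
printf '%s\n' "$rows" | awk '{pairs[$2] = $1} END {n = 0; for (k in pairs) n++; print n}'
```

The first count is 2 (one per status), the second 3 (one per product) - exactly the difference the one-line fix makes.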
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent, objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is unusual, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do and, just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what from the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the  code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept that software is about communication is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being.</p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GOF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog Book, Kernighan and Ritchie&#39;s C Book, SICP and Knuth&#39;s The Art of Computer Programming series (and if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day-to-day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OSX&#39;s Terminal.app) .bash_profile gets sourced only on login. Specifically this means only when you enter your username and password from the console. The .bashrc file is sourced when starting an interactive session, that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell, such as when you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases the login rule applies: .bash_profile is sourced, and .bashrc only runs if your .bash_profile sources it.</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
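That delegation approach can be sketched and verified with a throwaway HOME; the .bash_profile body in the heredoc below is the part you would actually keep (the PATH line is just an example of one-time setup):

```shell
# Demonstrate the sourcing order with a throwaway HOME: a login shell
# reads .bash_profile, which in turn pulls in .bashrc.
home=$(mktemp -d)
printf 'echo bashrc sourced\n' > "$home/.bashrc"
cat > "$home/.bash_profile" <<'EOF'
# One-time environment setup lives here...
export PATH="$HOME/bin:$PATH"
# ...then everything interactive is delegated to .bashrc.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
EOF
HOME="$home" bash --login -c 'true' 2>/dev/null
```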
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Setup a static block in the admin CMS screens giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;indentifer&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>  $mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
  $mock-&gt;expects($this-&gt;any())
            -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
            -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. Making optimal solutions exclusive reduces the efficiency with which people can create robust software. That doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>You can do this manually with <em>apt-get purge &lt;package&gt;</em>, or you can use the <em>dpkg</em> command to do it all in a nice one-liner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
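<p>Before running the purge it can be reassuring to preview which packages are actually in the &#39;rc&#39; (removed, config files remain) state. A small sketch; the sample lines below stand in for real <em>dpkg -l</em> output so the filter can be tried anywhere:</p>

```shell
# Packages dpkg has removed but not purged show an "rc" status in `dpkg -l`.
# On a Debian/Ubuntu box you would pipe `dpkg -l` straight into the awk filter;
# the inlined sample here is purely illustrative.
sample='ii  bash   5.1-2   amd64  GNU Bourne Again SHell
rc  nginx  1.18-0  amd64  high performance web server'
printf '%s\n' "$sample" | awk '/^rc/ {print $2}'
```

<p>Once you&#39;re happy with the list, feed the names to <em>apt-get purge</em> or run the dpkg one-liner.</p>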
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>Note that modifyvm can only be used when the VM is powered off; for a running VM, use controlvm instead.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libXss, not being there. But it <em>is</em> there, albeit only as a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that. We need compatible x86 shared libraries, but when you hit these sorts of issues it&#39;s always best to see what else is missing too. The <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
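<p>On a listing this long it&#39;s easy to miss an entry, so it&#39;s worth filtering for the unresolved libraries directly. A quick sketch, using /bin/ls as a stand-in binary since it should resolve cleanly on any healthy system:</p>

```shell
# Print only the shared libraries the loader cannot resolve.
# An empty grep result means every dependency is satisfied.
ldd /bin/ls | grep 'not found' || echo "all libraries resolved"
```
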
<p>So looking at this, I have neither a compatible libXss nor the various Qt libraries installed.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc you can compile once with -m32 and once with -m64 to see how your program changes based on the target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in programs based on how they are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite, and like me, have recently reinstalled your OS you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can occasionally be dizzying; things have been put into and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately when upgrading my test suites to be 3.6 compatible is the extraction of the database testing classes into their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause is the strange manner in which Magento looks up addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which, when you have 200k of them, takes a while). The final call in the chain, getItemById(), iterates over the collection, assigning each address to an array keyed by its entityId, and then returns the entry matching $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. What worries me is that I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>Like with Apache, with Varnish 3 you can test the syntactic correctness of your VCL files without having to roll some dice restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed (<a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a>) and configuring a sane desktop environment with the global menu and overlay scrollbars zapped away (<a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>).</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an x-window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or with GNU find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; $IMAGE $(echo $IMAGE | sed &#39;s/.jpg$/-resized.jpg/&#39;)
&gt; done
</code></pre>
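<p>As a side note, the sed call used to build the new filename can be replaced with bash parameter expansion, which avoids spawning two extra processes per image. A minimal sketch of just the renaming logic:</p>

```shell
IMAGE="photo.jpg"
# ${IMAGE%.jpg} strips a trailing .jpg, so the new name can be
# built without the echo | sed pipeline
NEWNAME="${IMAGE%.jpg}-resized.jpg"
echo "$NEWNAME"   # photo-resized.jpg
```
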
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720 maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In that case, convert would resize the image to 1152x720, fitting it inside the requested box while preserving its 16:10 aspect ratio. If we <em>really</em> want it to ignore common sense and squish things down to exactly 1280x720, we need to use the bang operator e.g.</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch into a branch called adiffnamefortheremotebranch on the remote origin. Now, can you see where this is going with respect to our delete? A refspec with just a leading : (and no local branch name) is basically saying push <em>nothing</em> into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
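<p>The whole cycle can be reproduced end to end using a throwaway bare repository to stand in for the remote (the paths and names here are just for the demo):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"          # stand-in for a real remote
git init -q "$tmp/work" && cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial'
BR=$(git symbolic-ref --short HEAD)           # master or main, depending on git version
git branch develop
git remote add origin "$tmp/origin.git"
git push -q origin "$BR" develop              # remote now has both branches
git push -q origin :develop                   # push "nothing" into develop: deleted
git ls-remote --heads origin                  # only $BR remains
```
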
<p>I find thinking of it in these terms, makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git Chapter on remotes</a> which  also covers (although, sadly, too briefly) deleting remote branches. </p>
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do that. What is different is how Sponge waits until end-of-file (EOF) before opening and writing to an output file. I.e. it soaks up all the input data before commencing writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from windows latin 1 to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of say c source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
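<p>If Moreutils isn&#39;t installed, the same in-place effect can be had with the classic temporary-file dance. A small round-trip sketch using a one-line cp1252 file:</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
printf 'caf\xe9\n' > demo.txt        # 0xE9 is e-acute in cp1252
iconv -f cp1252 -t utf-8 demo.txt > demo.utf8
mv demo.utf8 demo.txt                # stands in for "| sponge demo.txt"
wc -c < demo.txt                     # 6 bytes: the accent is now two bytes
```
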
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and if they are like me, they must never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively creates a temporary file descriptor, and then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
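<p>You can try the same shape locally without wget, substituting a cat of a local tarfile for the download, purely to illustrate the plumbing:</p>

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
mkdir atarfile && echo 'hello' > atarfile/file.txt
tar czf atarfile.tar.gz atarfile && rm -r atarfile
# cat stands in for wget -q -O - here; the redirection is identical
tar zx < <(cat atarfile.tar.gz)
cat atarfile/file.txt
```
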
<p>Sounds complicated but looks simple.</p>
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply going through the list unchecking everything. First you uncheck everything you can and save your changes. Then you go through the list again, disabling the modules that were previously greyed out (because they still had active dependants). It can take 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and  &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
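<p>The same pattern works for any pair of commands where one produces a list the other consumes. A minimal, self-contained sketch of the substitution mechanics (the function names here are hypothetical stand-ins, not real drush commands):</p>

```shell
# list_modules stands in for `drush pm-list --pipe`; disable stands in for
# `drush pm-disable`. Command substitution splices the emitted list into
# the consumer's argument list, one argument per whitespace-separated word.
list_modules() { printf 'ad\nad_channel\nclick_filter\n'; }
disable() { for m in "$@"; do echo "disabling $m"; done; }
disable $(list_modules)
```

<p>Incidentally, Drush also accepts a &#39;-y&#39; (&#39;--yes&#39;) option that answers confirmation prompts for you, which is handy when scripting this.</p>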
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, but with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up)</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
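<p>On newer versions of Git, the push and set-upstream steps can be combined with &#39;git push -u&#39;. A sketch against a throwaway local remote, so it is safe to run anywhere (paths and the commit identity are placeholders):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"    # stand-in for a real remote
git init -q "$tmp/work"
cd "$tmp/work"
git remote add origin "$tmp/origin.git"
git -c user.email=a@b -c user.name=A commit -q --allow-empty -m 'Initial feature commit'
git checkout -q -b my-new-feature
git push -q -u origin my-new-feature    # -u = --set-upstream: push and track in one step
git rev-parse --abbrev-ref '@{u}'       # shows origin/my-new-feature
```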
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say, for example, user &#39;aaron&#39; belongs to group &#39;aaron&#39; but you want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
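<p>To verify the result, &#39;id&#39; is the quickest check: &#39;-G&#39; prints all of a user&#39;s groups and &#39;-n&#39; prints names instead of numeric IDs. The example below runs against the current user so it is safe anywhere; pass a username to inspect another account:</p>

```shell
# Print the current user's group names; after the usermod above,
# `id -nG aaron` would list 'wheel' among them.
id -nG
```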
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple, useful application of <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came up today when I started making some amendments to a repo&#39;s master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
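<p>A closely related command is &#39;git stash pop&#39;, which applies the stash and drops the stash entry in one step. A self-contained sketch in a throwaway repository (the commit identity and file name are placeholders):</p>

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=A commit -q --allow-empty -m init
git branch develop
echo wip > notes.txt
git add notes.txt            # changes accidentally made on the wrong branch
git stash -q                 # shelve them
git checkout -q develop
git stash pop -q             # re-apply the changes and drop the stash entry
git status --short           # notes.txt is back, now on develop
```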
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote you can use this instead</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
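<p>Newer versions of Git shorten the first form: if &#39;develop&#39; exists on exactly one remote, a bare &#39;git checkout develop&#39; creates the tracking branch for you (&#39;git checkout --track origin/develop&#39; is the explicit spelling). A sketch against a throwaway local clone (paths and identity are placeholders):</p>

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
cd "$tmp/upstream"
git -c user.email=a@b -c user.name=A commit -q --allow-empty -m init
git branch develop
git clone -q "$tmp/upstream" "$tmp/clone"
cd "$tmp/clone"
git checkout -q develop      # creates a local develop branch tracking origin/develop
git rev-parse --abbrev-ref '@{u}'
```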
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the zend framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue change into your jenkins home dir (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
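<p>Equivalently, you can have git write that file for you with &#39;git config --global&#39;, run as the jenkins user (e.g. via &#39;sudo -u jenkins -H git config ...&#39;). The sketch below points HOME at a temporary directory so it is safe to try anywhere:</p>

```shell
export HOME=$(mktemp -d)     # throwaway HOME; omit this on a real Jenkins box
git config --global user.name Jenkins
git config --global user.email jenkins@localhost
cat "$HOME/.gitconfig"       # same [user] section as above
```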
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment set up correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.org">git-scm.org</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application. You&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we&#39;ll need to make a small amendment to the phpunit configuration file: since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured correctly to run Zend Framework test cases; we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view them.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with a (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>First, the pdepend task generates a pair of SVG images. In order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a build once before its workspace can be initialised. On the left hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git, then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on it and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple HTTP requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload: Reload the server configuration</li>
<li>restart: Restart the server</li>
<li>exit: Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
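<p>With the server address in a shell variable, the three maintenance calls all follow the same pattern; the snippet below only prints the commands (the address is a placeholder, no server is contacted):</p>

```shell
# Print the three maintenance requests; on a live instance you would run
# each curl command against your real server address.
JENKINS=localhost:8080
for cmd in reload restart exit; do
  echo "curl http://$JENKINS/$cmd"
done
```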
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped specific package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
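<p>If you would rather script the port change than edit the file by hand, sed can rewrite the line in place. A sketch against a temporary copy of the file (on a real box you would target /etc/default/jenkins with sudo; the new port number is just an example):</p>

```shell
f=$(mktemp)                  # stand-in for /etc/default/jenkins
printf '# port for HTTP connector (default 8080; disable with -1)\nHTTP_PORT=8080\n' > "$f"
sed -i.bak 's/^HTTP_PORT=.*/HTTP_PORT=8180/' "$f"   # rewrite the port line
grep '^HTTP_PORT=' "$f"
```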
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, this is the one I have lost the most time to.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to  to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file.  One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do, is resume broken file transfers e.g. from a sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful but it helped google lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>Turns out there&#39;s already something there to do it </p>
<pre><code>Mage_Payment_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment_method inheritance tree use</p>
<pre><code>Mage_Payment_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You have now set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent <em>git pull</em> invocations will give you an ugly error</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<pre><code>$ git remote add --track master origin git@github.com:ajbonner/foo.git
</code></pre>
</li>
<li><p>Refer to the remote branch using --set-upstream</p>
<pre><code>$ git branch --set-upstream master origin/master
</code></pre>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least amount of work.</p>
<p>You can also use:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
<p>to avoid having to set this up for each new repository.</p>
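<p>Putting the whole flow together, here is a runnable sketch; a bare repository on local disk stands in for the github remote, and the user details are placeholders:</p>

```shell
set -e
tmp=$(mktemp -d)

# A local bare repository standing in for the remote (e.g. one on github)
git init --bare "$tmp/foo.git" >/dev/null

# An existing codebase with at least one commit
git init "$tmp/work" >/dev/null
cd "$tmp/work"
git -c user.email=you@example.com -c user.name=you \
    commit --allow-empty -m 'initial commit' >/dev/null

# Attach the remote and push the current branch to it
branch=$(git symbolic-ref --short HEAD)
git remote add origin "$tmp/foo.git"
git push -q origin "$branch"
git remote -v
```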
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my Unit Testing and with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, light-weight approach. EcomDev meanwhile, on the surface, appears to provide far greater support for testing at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took around 4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing the list of tables you want to export. Another reply had a better way: call mysql to get a list of tables matching a pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on github for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
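<p>The gist has the full version; the rough shape is below. The function name and option handling here are my own sketch and may differ from the gist; note that the mysql and mysqldump calls will each prompt for the password:</p>

```shell
# Dump every table in database $2 whose name matches the SQL LIKE
# pattern $3, connecting as user $1. Example:
#   mysqldump_bypattern root mydb 'mytables_%'
mysqldump_bypattern() {
    user="$1"; db="$2"; pattern="$3"
    # Ask mysql for the matching table names; -N suppresses the header row
    tables=$(mysql -u"$user" -p -N -e "SHOW TABLES LIKE '$pattern'" "$db")
    # Hand the whole list to a single mysqldump invocation
    # shellcheck disable=SC2086
    mysqldump -u"$user" -p "$db" $tables
}
```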
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest jgit packages not yet being in the indigo release p2 update repository. To get these installed until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k|
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using instead the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first hash value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby.</p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with are Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or more practically put, a colon in front of the string you want to use for a hashkey. Agile Web Development with Rails (4th Ed) puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high level interpreted language shouldn&#39;t have to worry about. Memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string for example in PHP (and as in C) is still ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as a hashkey is  inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a symbol as a hashkey than a Ruby string. In Ruby (and most OO languages) two strings, even if consisting of the same sequence of characters, are different objects. In Ruby two symbols of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
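<p>You can see this identity property directly in irb: two equal strings are distinct objects, while two equal symbols are the very same object.</p>

```ruby
a = "key"
b = "key"
puts a == b       # true: same sequence of characters
puts a.equal?(b)  # false: two distinct String objects

# Symbols with the same name are always the same object
puts :key.equal?(:key)                 # true
puts :key.object_id == :key.object_id  # true
```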
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to, they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant but read-only access isn&#39;t enforced at runtime, you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point with my lack of experience with Ruby I don&#39;t really feel qualified to answer that yet definitively. My gut though, says no. I feel in a high level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural sounding expressions.</p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ASCII or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice efficient way to do your backup, this should be avoided as (by default in MyISAM) you’re locking tables and denying other clients access to them. InnoDB implements row level locking which is slightly less offensive, but still should be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of zip format is important. You have to trade off decompression speed with filesize. The extra cputime consumed decompressing a bzip2 datadump may be a worse cost than the few extra megabytes you save over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase  # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, while performing a mysqldump entire tables will be locked, that means other clients will not be permitted to write to a table while the dump is being performed. If you have large MyISAM tables this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent state of the table without blocking other applications. It means writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean the integrity of the table can be lost as writes occur to it during the backup process. That risk is weighed against the risk of blocking access to the table during a lengthy backup process.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of zip format will have a large bearing on import performance. Gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement in an InnoDB table is autocommitted. This comes with unnecessary overhead when performing a batch import as you really only need to commit once the table has been fully imported.</p>&#13;
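<p>Putting the options above together, a dump command for a large, mostly-InnoDB database might look like this (the user and database name are placeholders):</p>

```shell
# Assemble the mysqldump flags discussed above into one command line
OPTS="--single-transaction --skip-lock-tables --disable-keys --no-autocommit"
CMD="mysqldump $OPTS -uuser -p mydatabase > mydump.sql"
# Against a live server you would run: eval "$CMD"
echo "$CMD"
```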
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In PERL compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code> which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to <code>$ svn del</code>.</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will stage and commit all modified tracked files; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on github. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) and usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However, if you have just created a new bare remote repository and want to push an initial commit, you need to set up the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Creates a local copy of a remote repository, such as one created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you run</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by recreating PEAR's missing cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser, go to 127.0.0.1:8808 and read all the API docs for your installed gems (i.e. Rails and friends).</p>&#13;
<p>That, is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point though entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in-place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing comes to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up and representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> site up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the cli-tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite involved, as you will need an initial ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to set up your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to set up your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes it can have a completely different meaning, such as 'Bad' (which means bath in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly the same as you expect, some won't work at all, and in some cases they'll only sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0]; ?&gt;&#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name. In Ruby, as in PERL, it is the first argument passed into the program.</p>&#13;
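<p>That difference can be made concrete with a throwaway PHP sketch (the argument values below are invented for the demo):</p>

```php
<?php
// In PHP (as in C and bash) argv[0] is the script name, so the real
// arguments start at index 1. Slicing from index 1 gives the same
// view that Ruby's ARGV provides for the same invocation.
$php_argv  = ['myscript', 'helloworld']; // what PHP/C/bash see
$ruby_view = array_slice($php_argv, 1);  // what Ruby's ARGV holds

echo $ruby_view[0] . "\n"; // helloworld
```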
<p>I'm trying to think which makes more sense; probably the Ruby/PERL implementation. I'm used to starting at index 1 though. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down in Windows. To uninstall a plugin/feature you need to go to the install new software screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets a little confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation, as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document with an ID. It's even worse if your unique ID happens to be a string such as a url or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage, you may run into trouble when running termDocs() on a string value such as a URL or path as opposed to an integer ID. This happens when the field was added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around the source looking to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucence]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is that its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
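<p>As a quick runnable check of the filter's behaviour (the class and method names below are invented for the demo):</p>

```php
<?php
// Hypothetical demo class with one method per visibility level.
class DemoClass
{
    public function pubMethod() {}
    protected function protMethod() {}
    private function privMethod() {}
}

$r = new ReflectionClass('DemoClass');

// Collect only the public method names; the filter excludes the rest.
$names = array_map(
    function (ReflectionMethod $m) { return $m->getName(); },
    $r->getMethods(ReflectionMethod::IS_PUBLIC)
);

print_r($names); // only 'pubMethod' survives the filter
```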
<p>Like many platforms, in PHP it seems documentation is no replacement for digging around the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki, or if I am at home, flicking through to the back pages of the GOF book or, if in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>Firstly you need to backup your forum database, the simplest way is to make use of the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options - &gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility to do this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
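<p>Incidentally, the same conversion is available from PHP itself via its iconv extension, which can be handy if shell access on your host is limited. A tiny sketch:</p>

```php
<?php
// 'Café' encoded in Latin-1 (é = 0xE9); iconv() re-encodes it as
// UTF-8 (é = 0xC3 0xA9), the same job the command-line utility does.
$latin1 = "Caf\xE9";
$utf8   = iconv('ISO-8859-1', 'UTF-8', $latin1);

echo bin2hex($utf8) . "\n"; // 436166c3a9
```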
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Any custom language packs also need converting to UTF-8 before they are imported.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[il8n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, Developers, and Web Developers in particular are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal, is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but keep your CV conservative and well spaced, using a minimum of fonts. Giant mastheads, fancy bullets and a mess of typefaces aren't impressing anyone, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads: clean and clear is the key, but save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file', not the client's inbox.</li>&#13;
<li>Don't list every course you did at university and expect the client to care. Of our graduate applicants, not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job: work out what specific skills you got from your courses and detail each skill, qualified with a practical example of how you applied it in your coursework. Start doing that for all of your courses and very quickly you'll run up against the two-page rule. Keep going, but when you're finished pare the text down into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, housemate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leaves a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make the best use of these in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify and support them. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life, not least this blog itself. If you listened to every top ten list of what not to include in your CV, you'd quickly find there's absolutely nothing you should put in it. Critically consider what others suggest a good CV looks like, and make your own mind up based on the supporting arguments made and your own CV's feedback. For example, if you disagree with point two and decide to include an 'interests section', ask recruiters when they call you what they think about it: did it provide value, or was it noise? If you're getting interviews, ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>To guide me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby' book. I checked out the ubiquitous Pragmatic Programmer's 'Programming Ruby', but the third edition seems more suited to stopping doors than teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was working through my first programming book, 'Learning Perl'. Maybe it's the similarity of the titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer) and Procs and Mixins (Traits), a simple thing scares me: constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just global variables declared capitalised. Oh sorry, that should be capitalized: Ruby doesn’t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>REALLY </strong><span>want to change its value, Ruby won't stand in your way, or even make it that hard to do.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it’s ‘<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>’, but I really don’t want to have to worry about my constants changing. Ruby makes a big deal of its ‘duck typing’. You know, if it quacks like a duck, it is a duck. Unfortunately, in Ruby a constant quacks like a duck but it bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend, note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys that haven’t been seduced by Python, and of course a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense: PHP was originally designed to be a template engine producing HTML, with a core C backend doing the grunt work. The idea seems ridiculous now, as CPU cycles are much cheaper than man hours, but in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots: the strange and inconsistent function parameter orders, right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your application of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and be left with a recognisable C fragment.</p>&#13;
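To illustrate, here is a minimal TCP client sketch in C, assuming a POSIX system. The helper name tcp_connect, the localhost:80 target and the HTTP request are all placeholders of mine, and error handling is kept deliberately brief; in PHP the equivalent is little more than fsockopen() plus fwrite().

```c
/* Minimal POSIX TCP client sketch. Host, port and request are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int tcp_connect(const char *host, const char *port)
{
    struct addrinfo hints, *res, *p;
    int fd = -1;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd == -1)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
            break;                    /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;                        /* -1 on failure */
}

int main(void)
{
    int fd = tcp_connect("localhost", "80");
    if (fd >= 0) {
        const char *req = "HEAD / HTTP/1.0\r\n\r\n";
        write(fd, req, strlen(req));
        close(fd);
    } else {
        fprintf(stderr, "could not connect\n");
    }
    return 0;
}
```

Swap the getaddrinfo/socket/connect trio for a single fsockopen() call and the structure of the program barely changes.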
<p>The biggest benefit, though, of learning a bit of C or C++ is to gain an “appreciation” of memory management. C is unusual among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL ‘\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares an array with room for 20 characters (19 usable plus the terminating NUL), and str itself refers to an address in memory where those 20 bytes have been reserved. Now if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20-char string defined but want to store a 25-char string, you need to reallocate memory. You could declare a new array or do a concatenation operation. Either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be a purely templating language for C web applications, which is where PHP modules / extensions come in. These were originally where your business logic was to go. PHP extensions are compiled to native code, and as such run FAST. You can then access these functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is on the far left (big endian) or on the far right (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, zend_pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient at times when you already have the data sitting in a variable. A way of getting around the need to worry about physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is actually very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: one with the request data Worldpay sent to your callback URL, including the encoded post data, and one with the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your post data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library within your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[develoment tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple issues, one which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><code>Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line 529</code></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
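<p>In context, the before/after for the profiler block in layout/page.xml looks like this:</p>&#13;

```xml
<!-- before: the profiler block has no name, which 1.4.1 chokes on -->
<block type="core/profiler" output="toHtml"/>

<!-- after: give the block an explicit name -->
<block type="core/profiler" output="toHtml" name="core_profiler"/>
```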
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead...'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites work behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It effects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With Magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect on case-sensitive environments.</p>&#13;
<p>Your <code>Mage::getModel('mymodule/a_long_name_for_a_model')</code> call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely, trying to address your model as <code>Mage::getModel('mymodule/alongnameforamodel')</code> will see the class loader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
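<p>Not Magento's literal source, but a sketch of the transformation the autoloader applies (the function name here is illustrative): underscores become directory separators, so every segment's capitalisation must match the file layout exactly.</p>&#13;

```php
<?php
// Sketch of a Magento 1 style autoloader: each underscore-separated
// segment of the class name becomes a directory, so class-name casing
// must match the filesystem on case-sensitive systems.
function classToPath($class) {
    return str_replace(' ', '/', ucwords(str_replace('_', ' ', $class))) . '.php';
}

echo classToPath('MyModule_Model_A_Long_Name_For_A_Model');
// MyModule/Model/A/Long/Name/For/A/Model.php
```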
<p>On Windows this is fine, on case-sensitive e.g., HFS (Mac) or Unix file-system, this will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchyas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making only update simple attributes (for example, a sales-ranking attribute), you can use the following code to set the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters: the first is the model containing the attribute value, the second the attribute code. To find the attribute code, look it up either in the db (eav_attribute) or in the admin backend under Catalog-&gt;Attributes.</p>
<p>A getResource()-&gt;saveAttribute() call takes around 0.2 seconds, while a full save() takes 2-3 seconds. When iterating over a large product base, that difference is HUGE.</p>
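<p>For example, a batch loop might look like this (the collection setup and the lookupSalesRank() helper are illustrative, not from the original post):</p>

```php
<?php
// Illustrative batch update: iterate the product collection, set one
// simple attribute, and persist only that attribute rather than
// calling the expensive $product->save().
$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    $product->setNumSales(lookupSalesRank($product)); // lookupSalesRank() is hypothetical
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```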
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table that has a date set more than 1 month old (compared to the current time).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
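<p>If the column holds unix timestamps instead (here a hypothetical created_ts column), the same comparison becomes:</p>&#13;

```sql
-- Convert the integer timestamp to a DATETIME before comparing
SELECT * FROM table
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(created_ts);
```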
<p>The DATE_SUB call, if run on 2010-05-20, will return 2010-04-20. Any date_column value earlier than this is therefore more than 30 days old, and those are exactly the rows the comparison above matches.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade Magento, you will at best lose your custom configuration, and at worst cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in Magento, you need to consider three aspects:</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is the simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml files (app/design/frontend/default/default/layout) to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column layout, you need to edit page.xml and, within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to a Magento class, Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root, i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/template.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included. Remember to refresh, or clear, Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code in 3columns.phtml, that text will appear on every page that uses that layout template. You may not, for example, want to include it on customer dashboard pages. In this case you will need to define a remove statement within the layout .xml for the group of pages you want to change. In this example, we need to edit customer.xml.</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of elements similar to &lt;default&gt;&lt;/default&gt; (which applies to ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which refers to all pages with the url customer/account/), and &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which refers to all pages with the url customer/account/edit/). Including your remove declaration in the right element here allows fine grained control over which pages your blocks appear on.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL customer account pages. Therefore within the &lt;customer_account&gt; element we add the code:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name="root" element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case sensitive HPS filesystem you wont be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB or de_DE (you can get a list of available locales by running 'locale -a'), just edit /etc/profile, ~/.profile or ~/.bash_profile and add the line <code>export LC_ALL='de_DE.UTF-8'</code> or <code>export LC_ALL='en_GB.UTF-8'</code>.</p>&#13;
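<p>For example, for the current session (add the same line to your profile file to make it stick):</p>&#13;

```shell
# Export a UTF-8 locale for this session; pick one that 'locale -a' lists.
export LC_ALL='de_DE.UTF-8'
```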
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native in PHP, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common windows encodings cp-1252 and iso-8859-1: lower ASCII is the basic alphabet, e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, differ, so a cp-1252 trademark symbol has a different code in utf-8. Try to render a cp-1252 copyright symbol as utf-8 and you will just see a question mark in the browser, because that byte sequence is not valid utf-8. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;ISO-8859-1&#39;, &#39;UTF-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions assume iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, ENT_QUOTES, &#39;UTF-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
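<p>Putting the two halves together, a minimal normalise-then-output sketch (the sample input string is illustrative):</p>

```php
<?php
// "\xA9 2010" is "(c) 2010" in iso-8859-1; normalise it to utf-8 once,
// at the point of input...
$isoText  = "\xA9 2010";
$utf8Text = iconv('ISO-8859-1', 'UTF-8', $isoText);

// ...then tell the escaping function the real encoding at output time.
echo htmlentities($utf8Text, ENT_QUOTES, 'UTF-8');
```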
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, at least not on Mac OSX: BSD sed requires an explicit (possibly empty) backup extension after -i, where GNU sed makes it optional, so you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick on OSX is to pass an empty extension: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
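<p>A portable alternative, if the script has to run on both GNU sed (Linux) and BSD sed (Mac OSX), is to always pass a backup suffix:</p>&#13;

```shell
printf 'hello world\n' > /tmp/helloworld.txt
# -i.bak edits the file in place and keeps the original as
# helloworld.txt.bak; both GNU and BSD sed accept this form.
sed -i.bak 's/hello/goodbye/g' /tmp/helloworld.txt
cat /tmp/helloworld.txt   # goodbye world
```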
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br />http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br />http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong><br />http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);</strong><br />http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB); ?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
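<p>Alternatively, Magento can assemble a full URL for a route with Mage::getUrl() (also defined in app/Mage.php); the route below is the standard onepage checkout route:</p>&#13;
<pre>&lt;a href="&lt;?php echo Mage::getUrl('checkout/onepage') ?&gt;"&gt;Checkout&lt;/a&gt;</pre>&#13;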
<p>It's well worth taking a look at the getBaseUrl function in the Mage class (app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php).</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf won't delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>The $(seq 0 $((${#FILES[@]} - 1))) part expands to the list of array indexes, from 0 up to one less than the number of elements in FILES (${#FILES[@]} is the element count). The seq command produces a sequence of numbers from x to y: if you call seq 0 4, you get the numbers 0, 1, 2, 3, 4, one per line.</p>&#13;
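<p>As an aside, bash can also loop over the array elements directly, with no index arithmetic at all; this is a simpler alternative to the construct above (quoting the expansion keeps elements with spaces intact):</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 )&#13;
&#13;
for FILE in "${FILES[@]}"; do&#13;
  echo "$FILE"&#13;
done</pre>&#13;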
<p>So while the syntax is a little smelly, the terse power of it, is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file in line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: baseurls, or swapping in test payment and shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain php. However, magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format magento will accept:</p>&#13;
<pre>&lt;?php&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
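&#13;
// Usage: run from the Magento root so that app/Mage.php resolves;&#13;
// the filename encrypt.php below is just an example:&#13;
//   php encrypt.php 'plaintext-to-encrypt'&#13;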
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item><item><title><![CDATA[Build PHP-CS-Fixer Phar from Source]]></title><description><![CDATA[<p><a href="http://cs.sensiolabs.org/">PHP-CS-Fixer</a> is yet another extremely handy utility to emerge from <a href="http://sensiolabs.org">Sensiolabs</a> to help manage code style compliance.</p>
<p>I use it as part of my git workflow (see my <a href="https://github.com/ajbonner/magento-githooks">magento githooks repo</a>) and I find it really does help keep code consistent.</p>
<p>When you work on projects with more than a couple of developers, niggling differences in style can lead to unintentional errors creeping into your codebase (think omitting braces in if statements, for example).</p>
<p>Anyway, one of the neat features in the master branch is the ability to specify a config file to control how php-cs-fixer behaves rather than having to define everything on the commandline (or depend on .php_cs being in the path). Unfortunately this feature is not yet in a pre-baked release of php-cs-fixer, so to use it you have to run from source. I quite like the convenience of the phar version but it&#39;s unclear how to build one directly from the sources.</p>
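<p>For reference, and purely from memory of the 1.x-era format (so double-check against the project README before relying on it), a master-branch .php_cs file looked roughly like this:</p>
<pre><code>&lt;?php
// Sketch of a master-branch .php_cs config - class names are from
// the Symfony\CS namespace the project used at the time
$finder = Symfony\CS\Finder\DefaultFinder::create()
    -&gt;in(__DIR__);

return Symfony\CS\Config\Config::create()
    -&gt;finder($finder);
</code></pre>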
<p>After a bit of digging around in the code and issues list, I found they are using the <a href="https://github.com/kherge/php-box">php-box</a> project to build release phars. It&#39;s actually very simple, but to save others having to figure it all out, just follow these steps.</p>
<pre><code>$ git clone https://github.com/fabpot/PHP-CS-Fixer
$ cd PHP-CS-Fixer
$ composer.phar require --dev &#39;kherge/box=~2.4&#39;
$ vendor/kherge/box/bin/box build
&gt; Building...
</code></pre>
<p>Do a quick <em>ls</em> and you&#39;ll notice you have a minty fresh <em>php-cs-fixer.phar</em> file.</p>
<p>Done.</p>
]]></description><link>http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/82386070612/build-php-cs-fixer-phar-from-source</guid><category><![CDATA[php-cs-fixer]]></category><category><![CDATA[phar]]></category><category><![CDATA[php-box]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Apr 2014 14:29:22 GMT</pubDate></item><item><title><![CDATA[Append a User to a Supplemental Group on Mac OSX]]></title><description><![CDATA[<p>If you&#39;re trying to add a user to a group under OSX you might get stumped. This is straightforward enough on linux, right? You go</p>
<pre><code>$ usermod -a -G thegroup theuser
</code></pre>
<p>And job done. But OSX uses <a href="http://en.wikipedia.org/wiki/Apple_Open_Directory">Open Directory</a> rather than traditional flatfiles like /etc/passwd and /etc/group to store information about users and domains. So the typical unix commands we are used to don&#39;t work.</p>
<p>The <em>dscl</em> (directory service command line) utility lets you manipulate Open Directory values and, in our case, add a user to an additional group. This is handy if, for example, you want to add your user to the wheel group to make use of password-free <em>sudo</em>.</p>
<pre><code>$ dscl localhost --append /Local/Default/Groups/&lt;groupnamehere&gt; &lt;usernamehere&gt;
</code></pre>
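<p>To check the change took effect, you can read the group record back with the same utility (using the same placeholder group name as above):</p>
<pre><code>$ dscl localhost -read /Local/Default/Groups/&lt;groupnamehere&gt; GroupMembership
</code></pre>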
]]></description><link>http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/81593529751/append-a-user-to-a-supplemental-group-on-mac-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Apr 2014 16:28:52 GMT</pubDate></item><item><title><![CDATA[Git Alias to Simplify Setting Upstream Branch]]></title><description><![CDATA[<p>One annoying thing about git is that if you push a new local branch to a remote repository and forget the -u argument (short for --set-upstream), it does not automatically set the local branch to track the remote. I forget to include this argument most of the time.</p>
<p>So, later on, you&#39;ll probably want to pull changes down from the remote and you&#39;ll end up seeing something similar to this</p>
<pre><code>➜  store git:(zendesk) git pull --rebase
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
See git-pull(1) for details

git pull &lt;remote&gt; &lt;branch&gt;

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/&lt;branch&gt; zendesk
</code></pre>
<p>Now it&#39;s not <em>that</em> hard to type out the suggested command above to set the upstream branch, but I got sick of having to do it so often, and I have given up trying to remember -u, so I created a git alias to automate things and save some keystrokes.</p>
<p>In your ~/.gitconfig under the alias section, add this</p>
<pre><code>    sup = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`
</code></pre>
<p>You can use the alias by issuing the following command in your terminal</p>
<pre><code>$ git sup
</code></pre>
<p>This will look at the current branch and set its upstream to origin/branchname.</p>
<p>If you tend to use another remote name other than origin, change the alias accordingly.</p>
<p>I have a few other useful aliases which you can checkout (hah, sorry :)) in my full <a href="https://github.com/ajbonner/unix/blob/master/gitconfig">gitconfig</a>.</p>
]]></description><link>http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/80766268890/git-alias-to-simplify-setting-upstream-branch</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 26 Mar 2014 09:57:00 GMT</pubDate></item><item><title><![CDATA[Passing Variables along a Bash Pipeline]]></title><description><![CDATA[<p>Today I learned that each stage of a bash pipeline executes in a separate subshell…this means variables set in one stage are not visible later in the pipeline (or to the parent shell), as each subprocess gets a brand new environment.</p>
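<p>A minimal demonstration of the effect (assuming plain bash defaults, i.e. the lastpipe option is off):</p>
<pre><code>COUNT=0
printf 'a\nb\n' | while read -r line; do
  COUNT=$((COUNT+1))   # increments inside a subshell; the parent never sees it
done
echo "$COUNT"          # prints 0, not 2
</code></pre>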
<p>For some workarounds checkout <a href="http://mywiki.wooledge.org/BashFAQ/024">http://mywiki.wooledge.org/BashFAQ/024</a></p>
]]></description><link>http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</link><guid isPermaLink="true">http://aaronbonner.io/post/80359431606/passing-variables-along-a-bash-pipeline</guid><category><![CDATA[bash]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Mar 2014 14:21:55 GMT</pubDate></item><item><title><![CDATA[Only Allow Git Fast-forward Merges to Avoid Ugly Merge Commits]]></title><description><![CDATA[<p>No one likes merge commits, they add noise to git history logs without really helping to convey what exact changes have occurred.</p>
<p>Usually these types of commits can be avoided by keeping feature branches up to date with git pull --rebase. When two branches have a direct common history, merges can be applied using the fast-forward strategy, avoiding the need for a stitch-things-together merge commit.</p>
<blockquote><p>Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history.</p></blockquote>
<p><a href="http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging">http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging</a></p>
<p>To ensure you&#39;ve kept your branches synced up with rebase and to avoid accidentally creating a merge commit, you can set git merge to only perform fast forward merges.</p>
<pre><code>$ git config --global merge.ff only
</code></pre>
<p>This way, you&#39;ll get a gentle reminder to rebase. If that&#39;s not feasible then you can force through the merge with</p>
<pre><code>$ git merge --no-ff
</code></pre>
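<p>Related tip (standard git, not specific to this setup): the same restriction can be applied to a single pull, without touching your config.</p>
<pre><code>$ git pull --ff-only
</code></pre>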
]]></description><link>http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</link><guid isPermaLink="true">http://aaronbonner.io/post/78444674979/only-allow-git-fast-forward-merges-to-avoid-ugly</guid><category><![CDATA[git]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Mar 2014 15:40:43 GMT</pubDate></item><item><title><![CDATA[Book Review: The Grumpy Programmer's PHPUnit Cookbook]]></title><description><![CDATA[<p>I&#39;ve had this book on my reading list for a little while now and I got through it in a single sitting yesterday so I thought I&#39;d chuck up a quick review for it.</p>
<p><img src="/images/tumblr_inline_motdcfGcvJ1qz4rgp.jpg" alt=""></p>
<ul>
<li><a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></li>
<li>Price: $29.00</li>
</ul>
<p>The Grumpy Programmer (actual name Chris Hartjes) amusingly <a href="http://www.littlehart.net/atthekeyboard/">blogs</a> and <a href="https://twitter.com/grmpyprogrammer">tweets</a> all things PHP and particularly PHPUnit. When I saw he was publishing this book, I was curious to see how his strident style would stand up to the longer form. Pleasantly, it turns out. </p>
<p>Chris maintains his gruff voice while whirling through the ins and outs of using PHPUnit. I&#39;ve been using PHPUnit for a long time now and I find that when I am really familiar with a tool, I tend to re/(over)use patterns that have served me well in the past. So while I found I was familiar with much of the material in the book, there are more than a few tidbits here that I picked up. I feel even the most grizzled PHPUnit veteran&#39;s testing regime will benefit from a read through.</p>
<p>The book seems aimed at the less experienced, which I did find a little surprising given the title. When I think cookbook I tend to think of the weighty O&#39;Reilly tomes. This book, though, is more like a lengthy tutorial than a cookbook in the O&#39;Reilly style. As tutorials go, it excels. It is detailed without being turgid and covers all the major aspects of using PHPUnit that I would expect it to, and then some. I found the chapter on Test Doubles (that is, mocks, stubs and fakes) to be particularly excellent. The vocabulary surrounding these terms tends to get mixed up and consequently programmers often treat them as the same thing. That leads, in my experience, at best to confusion and at worst to poor tests that are difficult to maintain.</p>
<p>As a quick aside, the book is published by <a href="https://leanpub.com/">LeanPub</a>, who ensure authors receive 90% of the proceeds from their work. I think this is a wonderful initiative. Writing, especially for a programmer is tremendously hard and I like the idea of those that attempt it, and do a good job, get appropriately rewarded for doing so. </p>
<p>So, back to the book. Peppered throughout the introduction to PHPUnit you find subtle wisdoms that are hard to argue with. A simple and, you would think, obvious example is always providing the final argument to assert statements: a description message. This message is displayed when the test fails, helping you quickly identify where the problem lies. Another: strictly encapsulated code that eschews static methods and class variables is (well, unsurprisingly) easier to test than code that is constantly mutating global state.</p>
<p>The book is quite short, coming in (at least in my pdf version) at 85 pages. I feel like there is sufficient scope for more content here. Especially for a &#39;cookbook&#39;. I would have loved to have seen more on using <a href="http://nat.truemesh.com/archives/000714.html">Data Builders</a> for example. The chapter on data providers is great, but I find you often need more fine grained control over your fixtures. <a href="http://metabates.com/2010/08/15/fixtures-v-factories-cant-we-all-just-get-along/">Factories</a> and data builders are a couple concepts that once learned, significantly reduce the friction of TDD. </p>
<p>I would perhaps also have liked to see more of an introduction to TDD itself, the motivations for it, and a brief comparison between the two principal TDD xUnit styles: the Statist style and the Mockist/London School style. The former is mainly interested in setting up some state, running a behaviour and checking the end state matches what you expected. The Mockist approach is less interested in observing state and more interested in the messages passed between objects (method calls between collaborators).</p>
<p>Overall I enjoyed the book, and it fills a much needed role in guiding budding PHP TDD practitioners in the use of the most mature tool we have available in PHP. I picked up a few neat new tricks and I suspect many PHP programmers will do the same. </p>
<p>You can buy it now at <a href="http://www.grumpy-phpunit.com/">grumpy-phpunit.com</a></p>
]]></description><link>http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/53622497460/book-review-the-grumpy-programmers-phpunit</guid><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 22 Jun 2013 21:36:00 GMT</pubDate></item><item><title><![CDATA[SQL: The smallest value in a table greater than zero]]></title><description><![CDATA[<p>A quick note to help me remember how to do this.</p>
<p>The problem: you want to select the smallest value from a set of values.</p>
<p>Let&#39;s say you have a table of products that are in a logical group and you want to select the lowest-priced product from that group; however, some products actually have a 0.00 price (for whatever reason). You don&#39;t want to show 0.00 as the lowest price for this group of products; you want to show the lowest price that happens to be greater than zero.</p>
<p>MySQL has a neat way to do this. Simply go: </p>
<pre><code>SELECT tableref.group_id, MIN(NULLIF(tableref.column, 0)) as min_price FROM tableref GROUP BY tableref.group_id;
</code></pre>
<p>The magic is in the <a href="http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_nullif">NULLIF</a> function, which will return null if tableref.column is equal to 0. Returning null removes that value from inclusion by MIN, having the effect of forcing the column value to be greater than zero.</p>
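<p>Here is a small worked example (the products table, its columns and the data are all made up for illustration):</p>
<pre><code>-- prices in group 1: 0.00, 5.00, 9.99
SELECT group_id, MIN(NULLIF(price, 0)) AS min_price
FROM products
GROUP BY group_id;
-- min_price for group 1 comes back as 5.00, not 0.00
</code></pre>
<p>One edge case to be aware of: if every price in a group is zero, NULLIF maps them all to NULL and min_price will be NULL rather than 0.00.</p>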
]]></description><link>http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</link><guid isPermaLink="true">http://aaronbonner.io/post/47096086502/sql-the-smallest-value-in-a-table-greater-than</guid><category><![CDATA[sql]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 04 Apr 2013 08:44:00 GMT</pubDate></item><item><title><![CDATA[A Guide to PHP, MySQL and Nginx on Macports]]></title><description><![CDATA[<p>By now, I&#39;m pretty much used to and accept OSX as a desktop operating system. I remember it being quite a change when I first moved over (from Gentoo linux and Gnome 2). The mouse movement was wonky, I had to overcome years of muscle memory (learning to use the cmd instead of control key), and probably hardest of all, was leaving behind Unix&#39;s idea of workspaces and virtual desktops. What I gave up in configurability though, was more than made up for by consistency and stability. Colleagues of mine can attest to the number of expletives launched at an emerge -vuND world that detonated my Gentoo Desktop.</p>
<p>So I&#39;m happy with a less flexible, but attractive, functional and predictable desktop and I think many others feel the same way. It&#39;s no real surprise to me then, that OSX has mostly killed off the idea of Linux on the Desktop. </p>
<p>But one area where OSX falls severely behind is its BSD-inspired Unix implementation. If you&#39;re born and raised on a diet of GNU (file|core)utils, of apt, yum and portage, heck, even sysvinit, OSX&#39;s realisation of Unix leaves a lot to be desired. </p>
<p>With considerable effort and some patience though, OSX can be brought to heel. With iTerm2 and Macports you can have a functional, GNU-like Unix experience.</p>
<p>I&#39;ll go over the minutiae of my Macports setup another time, but generally speaking I replace all the default OSX tools with GNU equivalents and favour /opt/local/bin over everything else. It means I can have one set of configs which work mostly unchanged across Linux and OSX instances.</p>
<p>Macports is pretty good and the folks that contribute to it do a great job. But it does lack the polish that you take for granted with the Linux package managers. Another point to keep in mind is Macports, like Portage and BSD Ports, is a source-code based &#39;package&#39; manager. When you install something, it is compiled right there and then on your system. When things go wrong, unless you&#39;re a competent C programmer (and even then) you&#39;re going to have a bad time.</p>
<p>One last thing to remember too, is OSX defaults to a case insensitive (but thankfully case-preserving) HFS filesystem. By default, PHP and php appear as the same thing to HFS.</p>
<p>So the point of this blog is to go over getting PHP running natively with Macports and how we can run an instance of <a href="http://www.magentocommerce.com">Magento</a> and the <a href="https://github.com/magento/taf">Magento Test Automation Framework</a> (TAF). </p>
<h2>MySQL</h2>
<p>MySQL is probably the easiest part of the whole thing to set up, so let&#39;s start there. For reference, the database files are stored under /opt/local/var/db/mysql55. </p>
<p>In Macports, MySQL carries a namespace of sorts by way of a version suffix (as does PHP). This lets multiple versions of a package be installed side by side. The drawback is that rather than having a mysql command, you have a mysql55 command. That&#39;s annoying. So we will install mysql_select, which lets us select a version to activate and gives us the proper file names.</p>
<pre><code>$ sudo port install mysql55-server mysql55 mysql_select
$ sudo port select mysql mysql55
$ sudo port load mysql55-server
</code></pre>
<p>We will want a database for our magento application.</p>
<pre><code>$ mysqladmin -uroot -p create magento 
</code></pre>
<h2>PHP / PHP-FPM</h2>
<p>Now we want to install PHP, PHP-FPM and the extensions Magento and TAF require. </p>
<pre><code>$ sudo port install php54 php54-fpm php54-curl php54-APC php54-gd php54-pcntl php54-mcrypt php54-iconv php54-soap php54-yaml php54-xdebug php54-openssl php54-mysql php54-pear php_select pear-PEAR

$ cd /opt/local/etc/php54
$ sudo cp php-fpm.conf.default php-fpm.conf
$ sudo cp php.ini-development php.ini

$ sudo vim php.ini
# set date.timezone and cgi.fix_pathinfo = 0

$ sudo vim php-fpm.conf
# make any changes for min / max num servers, error logging etc
</code></pre>
<p>The MySQL extension needs a little bit of prodding to look in the correct location for mysql.sock:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee --append /opt/local/var/db/mysql.ini
</code></pre>
<p>Once PHP-FPM is installed and configured you can use Macports to tell launchd to start it automatically.</p>
<pre><code>$ sudo port load php54-fpm
</code></pre>
<h3>PHP-Select</h3>
<p>As with MySQL, Macports lets you install multiple versions of PHP side by side. This can be handy if you want to run PHP 5.3 and PHP 5.4 at the same time. I just install a single version, but Macports effectively namespaces everything, so rather than &#39;/opt/local/bin/php&#39; you have &#39;/opt/local/bin/php54&#39;. php_select, which we installed earlier, fixes this by effectively &#39;activating&#39; one version and creating the usual executable names we&#39;re accustomed to.</p>
<pre><code>$ sudo port select php php54 
</code></pre>
<h3>PEAR</h3>
<p>PEAR is the single biggest pain in the whole process, and with some research it turns out it&#39;s because Macports&#39; PEAR isn&#39;t even meant to be used by end users (WAT?!).</p>
<blockquote>
<p>There is no MacPorts port that installs the pear package manager application with the intent that it be used by the end user outside a MacPorts port install. If you want to use pear manually on your own then you should install it using gopear, composer or some other method.
<a href="http://trac.macports.org/ticket/37683">http://trac.macports.org/ticket/37683</a></p>
</blockquote>
<p>So this goes a long way to explaining why Macports doesn&#39;t set PEAR up with sane defaults, or even put the pear command in the default path. But we can sort this all out easily enough ourselves.</p>
<pre><code>$ sudo pear config-set php_bin /opt/local/bin/php
$ sudo pear config-set php_dir /opt/local/lib/php/pear
$ sudo pear config-set ext_dir /opt/local/lib/php54/extensions/no-debug-non-zts-20100525
$ sudo pear config-set bin_dir /opt/local/bin
$ sudo pear config-set cfg_dir /opt/local/lib/php/pear/cfg
$ sudo pear config-set doc_dir /opt/local/lib/php/pear/docs
$ sudo pear config-set www_dir /opt/local/lib/php/pear/www
$ sudo pear config-set test_dir /opt/local/lib/php/pear/tests
$ sudo pear config-set data_dir /opt/local/lib/php/pear/data
$ echo &#39;PATH=$PATH:/opt/local/lib/php/pear/bin&#39; &gt;&gt; ~/.bashrc # or zshrc if you use zsh
</code></pre>
<p>Another issue you&#39;ll possibly have with PEAR, is it will default to the system PHP executable (/usr/bin/php) rather than your active Macports one. The pear command does test for an environment variable so we can set up an alias to pass this variable to pear on invocation.</p>
<p>Add an alias to your bashrc/zshrc in the form:</p>
<pre><code>alias pear=&#39;PHP_PEAR_PHP_BIN=php pear&#39;
</code></pre>
<p>Reload your bashrc/zshrc.</p>
<pre><code>$ source .bashrc (or source .zshrc)
</code></pre>
<p>Now the alias is active, we can check that it&#39;s working:</p>
<pre><code>$ /opt/local/lib/php/pear/bin/pear version
PEAR Version: 1.9.4
PHP Version: 5.3.15
Zend Engine Version: 2.3.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64

$ pear version
PEAR Version: 1.9.4
PHP Version: 5.4.12
Zend Engine Version: 2.4.0
Running on: Darwin avalanche 12.2.0 Darwin Kernel Version 12.2.0: Sat Aug 25 00:48:52 PDT 2012; root:xnu-2050.18.24~1/RELEASE_X86_64 x86_64
</code></pre>
<p>Now, to make installing PEAR packages easier, I turn the channel auto-discovery option on, which means you don&#39;t have to manually add channels for package dependencies (of which there are a lot when installing phing or phpunit…)</p>
<pre><code>$ sudo pear config-set auto_discover 1
</code></pre>
<p>Now add phing and phpunit and install them with all their optional dependencies and some extra packages for the Magento TAF.</p>
<pre><code>$ sudo pear channel-discover pear.phing.info
$ sudo pear channel-discover pear.phpunit.de
$ sudo pear channel-discover pear.symfony-project.com
$ sudo pear install --alldeps phing/phing 
$ sudo pear install --alldeps phpunit/phpunit
$ sudo pear install phpunit/PHP_Invoker
$ sudo pear install phpunit/PHPUnit_Selenium
$ sudo pear install -f symfony/YAML
</code></pre>
<h3>PECL/Extensions</h3>
<p>Macports by default creates .ini files to load extensions in /opt/local/var/db/php54. If you manually build any extensions, add the appropriate ini file here, for example:</p>
<pre><code>$ echo &#39;extension=yaml.so&#39; | sudo tee /opt/local/var/db/php54/yaml.ini
</code></pre>
<h2>Nginx</h2>
<p>Apache/Nginx. It doesn&#39;t really matter. Both are great, but in production I use Nginx, so I use it in development too. I install it with just the ssl variant enabled. To see the full range of available options, use:</p>
<pre><code>$ sudo port variants nginx 
</code></pre>
<p>To install:</p>
<pre><code>$ sudo port install nginx +ssl
$ cd /opt/local/etc/nginx
$ sudo cp fastcgi.conf.default fastcgi.conf
$ sudo cp fastcgi_params.default fastcgi_params
$ sudo cp mime.types.default mime.types
$ sudo cp nginx.conf.default nginx.conf
$ sudo mkdir conf.d sites-available sites-enabled ssl
</code></pre>
<p>Once installed, Nginx requires a little bit of work to hook up to PHP and particularly to work well with Magento. </p>
<pre><code>$ sudo vim nginx.conf  
# Insert the following towards the bottom of the file (but inside the http block) 
map $scheme $fastcgi_https {
   default off;
   https on;
}

##
# Virtual Host Configs
##
include conf.d/*.conf;
include sites-enabled/*;
</code></pre>
<p>For each app just add a server block to sites-available, then symlink it to sites-enabled.</p>
<pre><code>$ sudo vim sites-available/magento.dev.conf
# ...     
$ cd sites-enabled
$ sudo ln -s ../sites-available/magento.dev.conf 001-magento.dev.conf
</code></pre>
<p>This is the server block definition I use for Magento development; feel free to modify it for your needs.</p>
<pre><code>server {
    listen 80;
    listen 443 ssl;
    
    ssl_certificate     ssl/magento.dev.crt;
    ssl_certificate_key ssl/magento.dev.key;

    server_name magento.dev;
    root /Users/aaron/Sites/magento;

    location / {
        index index.html index.php; ## Allow a static html file to be shown first
        try_files $uri $uri/ @handler; ## If missing pass the URI to Magento&#39;s front handler
        expires 30d; ## Assume all files are cachable
    }

    ## These locations would be hidden by .htaccess normally
    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }
    location /shell/              { deny all; }

    ## Disable .htaccess and other hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location ~ \.php$ { ## Execute PHP scripts
        if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss

        expires        off; ## Do not cache dynamic content
        fastcgi_intercept_errors on;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_param  HTTPS $fastcgi_https;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  MAGE_RUN_CODE default; ## Store code is defined in administration &gt; Configuration &gt; Manage Stores
        fastcgi_param  MAGE_RUN_TYPE store;
        proxy_read_timeout 120;
        proxy_connect_timeout 120;
        include        fastcgi_params; ## See /etc/nginx/fastcgi_params
    }
    
    location @handler { ## Magento uses a common front handler
        rewrite / /index.php;
    }
}
</code></pre>
<p>We&#39;ve said our application lives on a server called &#39;magento.dev&#39;. So let&#39;s tell our hosts file about that.</p>
<pre><code>$ sudo vim /etc/hosts
# Insert or append to an existing line
# 127.0.0.1 localhost magento.dev
</code></pre>
<p>The last thing that needs to be done is setting up a self-signed SSL certificate / key pair and storing them under /opt/local/etc/nginx/ssl.</p>
<pre><code>$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myserver.key -out myserver.crt
$ sudo mv myserver.key /opt/local/etc/nginx/ssl/magento.dev.key
$ sudo mv myserver.crt /opt/local/etc/nginx/ssl/magento.dev.crt
</code></pre>
<p>Once that&#39;s done, we can start nginx.</p>
<pre><code>$ sudo port load nginx
</code></pre>
<h2>Web App Directory Config</h2>
<p>I keep my web apps living under /Users/aaron/Sites, but remember that every directory element in the path needs to have the executable bit set for all users (so the web server can traverse the directory tree). Literally this is a case of:</p>
<pre><code>$ chmod a+x /Users/aaron &amp;&amp; chmod a+x /Users/aaron/Sites
</code></pre>
<h2>Install Magento and TAF</h2>
<p><a href="https://github.com/netz98/n98-magerun">N98 Magerun</a> is the coolest thing to happen to Magento development since, well, I can&#39;t remember. It singlehandedly relegated a few thousand lines of cobbled-together bash script to the bin.</p>
<pre><code>$ cd /Users/aaron/Sites
$ curl -L -o magerun.phar https://github.com/netz98/n98-magerun/raw/master/n98-magerun.phar
$ chmod a+x magerun.phar
$ ./magerun.phar install
# Follow the directions and install to /Users/aaron/Sites/magento with base url http://magento.dev and database name &#39;magento&#39;.
</code></pre>
<p>After all that work, hitting <a href="http://magento.dev">http://magento.dev</a> <strong>should</strong> now bring up the magento demo store!</p>
<p>I&#39;ve been playing with <a href="https://github.com/magento/taf">Magento&#39;s Test Automation Framework</a> and it was the motivation for finally getting everything working properly natively.</p>
<p>TAF runs at a glacial pace, and in my normal development environment (VirtualBox over NFS) the universe would have undergone heat death long before the TAF suite completed its run.</p>
<p>Unfortunately the documentation for TAF is a bit of a mess (I&#39;ll write about my experience with it soon), but what it offers - 1500 automated tests - is a pretty big attraction.</p>
<p>Installation is actually pretty easy. I am assuming you don&#39;t already have git installed (remember you can use port variants to see what options are available):</p>
<pre><code>$ sudo port install git-core +bash_completion +credential_osxkeychain +doc +pcre +python27
$ sudo port install git-extras
$ cd /Users/aaron/Sites
$ git clone https://github.com/magento/taf taf
$ cd taf # /Users/aaron/Sites/taf
$ cp phpunit.xml.dist phpunit.xml
$ cp config/config.yml.dist config/config.yml
$ cd .. # /Users/aaron/Sites
$ curl -o selenium-server.jar http://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
</code></pre>
<p>To run the test suite, open up a new terminal:</p>
<pre><code>$ cd /Users/aaron/Sites
$ java -jar selenium-server.jar
</code></pre>
<p>Now the test suite is good to go:</p>
<pre><code>$ cd /Users/aaron/Sites/taf
$ ./runtests.sh
</code></pre>
<p>The test suite takes a loooooong time, so go for a run or something.</p>
<p>Hopefully these steps help out other PHP developers suffering from OSX.</p>
]]></description><link>http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44973182283/a-guide-to-php-mysql-and-nginx-on-macports</guid><category><![CDATA[php]]></category><category><![CDATA[nginx]]></category><category><![CDATA[mysql]]></category><category><![CDATA[macports]]></category><category><![CDATA[osx]]></category><category><![CDATA[magento]]></category><category><![CDATA[pear]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 09 Mar 2013 22:44:00 GMT</pubDate></item><item><title><![CDATA[MySQL with PHP and Macports]]></title><description><![CDATA[<p>The PHP Mysql port is a little bit of a pain. By default Macports doesn&#39;t set a default mysql sock. That leads to an error something like this:</p>
<pre><code>SQLSTATE[HY000] [2002] No such file or directory.
</code></pre>
<p>You fix it by appending the sock file path for the MySQL version you&#39;re using to the PHP mysql.ini file. I use mysql55, so to fix PHP I do this:</p>
<pre><code>echo &#39;pdo_mysql.default_socket=/opt/local/var/run/mysql55/mysqld.sock&#39; | sudo tee /opt/local/var/db/mysql.ini
</code></pre>
]]></description><link>http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/44883310831/mysql-with-php-and-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[lamp]]></category><category><![CDATA[php]]></category><category><![CDATA[mysql]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 08 Mar 2013 21:28:00 GMT</pubDate></item><item><title><![CDATA[Using SSHFS With Public Key Credentials]]></title><description><![CDATA[<p>In the past I used to mess around with NFS over SSH but these days the FUSE options are much easier, except when you want to use a public key to authenticate with the remote host. In that case do this:</p>
<p><s>$ sshfs -o ssh_command=&quot;ssh -i ~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost</s></p>
<p>Actually, right after I posted this, I realised there&#39;s a better way to do it. That is, use the &#39;IdentityFile&#39; option instead (as per the format of .ssh/config).</p>
<pre><code>$ sshfs -o &quot;IdentityFile=~/ssh_keys/aaron@awshost.pem&quot; aaron@aws.instance.com:/var/www/ ~/Sites/awshost
</code></pre>
<p>If you have any problems then add &#39;-o debug&#39; to the above command to help track them down.</p>
]]></description><link>http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</link><guid isPermaLink="true">http://aaronbonner.io/post/44713623354/using-sshfs-with-public-key-credentials</guid><category><![CDATA[ssh]]></category><category><![CDATA[cli]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 06 Mar 2013 17:10:00 GMT</pubDate></item><item><title><![CDATA[This is why people moan about PHP]]></title><description><![CDATA[<blockquote>
<p>There are only two hard things in Computer Science: cache invalidation and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>I wanted to grab the last bit of a URL that I knew would be the name of an image. I knew strstr well, but it gives you the remainder of a string haystack that occurs after some needle. I wanted this behaviour, but only from the <em>last</em> occurrence of the needle.</p>
<p><a href="http://www.php.net/manual/en/function.strstr.php">strstr</a> — Returns part of haystack string starting from and including the first occurrence of needle to the end of haystack.</p>
<p><a href="http://www.php.net/manual/en/function.strrchr.php">strrchr</a> - This function returns the portion of haystack which starts at the last occurrence of needle and goes until the end of haystack.</p>
<p>Let&#39;s look at how these work with an example</p>
<pre><code>$url = &#39;http://www.google.com/a/b/c/d.img&#39;;
echo strrchr($url, &#39;/&#39;); // prints /d.img  
echo strstr($url, &#39;/&#39;);   // prints //www.google.com/a/b/c/d.img
</code></pre>
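<p>The naming pain stands out by contrast with languages that encode the direction in the name. As an aside, here is the same pair of operations in Python, where str.partition and str.rpartition are obviously related (a quick illustrative sketch, not from the PHP manual):</p>

```python
url = "http://www.google.com/a/b/c/d.img"

# rpartition splits on the *last* occurrence of the separator...
head, sep, tail = url.rpartition("/")
print(tail)  # d.img

# ...while partition splits on the first, and the names say so
head, sep, tail = url.partition("/")
print(tail)  # /www.google.com/a/b/c/d.img
```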
<p>Now, I&#39;ve been programming in PHP for pushing on 12 years and this one still did my head in. The names of two very similarly behaving functions bear little resemblance to each other.</p>
<p>At this point the arguments and criticisms over the core API have been exhausted and there&#39;s little that can/will be done. But I do wonder if it would be worth creating an object library to encapsulate the primitive types such as String, Integer, Array, Float etc. I&#39;m not sure how possible <a href="http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html">auto-boxing</a> is with PHP, and indeed if it&#39;s even a good idea. But definitely some object wrappers would help ease this API pain.</p>
]]></description><link>http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</link><guid isPermaLink="true">http://aaronbonner.io/post/44540877038/this-is-why-people-moan-about-php</guid><category><![CDATA[php]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 04 Mar 2013 14:27:00 GMT</pubDate></item><item><title><![CDATA[An Oral History of Robert Taylor]]></title><description><![CDATA[<p>A (very long) interview with Robert Taylor covering his life and career. Taylor is a computing visionary that oversaw innovations such as Personal Computing, GUIs and (inter)networking.</p>
]]></description><link>http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</link><guid isPermaLink="true">http://aaronbonner.io/post/44442509073/an-oral-history-of-robert-taylor</guid><category><![CDATA[xeroc parc]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 03 Mar 2013 08:48:28 GMT</pubDate></item><item><title><![CDATA[Dealers of Lightning - Book Review]]></title><description><![CDATA[<blockquote>
<p>The best way to predict the future is to invent it. -- Alan Kay</p>
</blockquote>
<p><img src="/images/tumblr_inline_mixj4nTW061qz4rgp.jpg" alt=""></p>
<p>Recently I have written a little bit about Smalltalk, and in my enthusiasm I got hold of a book called <a href="http://www.amazon.com/Dealers-Lightning-Xerox-PARC-Computer/dp/0887309895"><em>Dealers of Lightning</em></a> by Michael Hiltzik. It covers the rise and fall of Xerox&#39;s Palo Alto Research Center (PARC), the research center from which Smalltalk emerged.</p>
<p>I initially read it to learn more about the context in which Alan Kay imagined Smalltalk and to find out who the Executive-X he mentioned in <em>The Early History of Smalltalk</em> was (it was Jerry Elkind). However, I ended up coming away with a lot more. In particular, a new appreciation for a number of scientists I previously knew very little about; scientists that are almost single-handedly responsible for the shape of modern computing.</p>
<p>I grew up in the 80s so I have no real personal appreciation for computing as it was before, say, 1984. I had an IBM clone (an Amstrad) and a VIC20. At school we had Commodore 64s/128s to play with. So for me, a computer has always been something that sits on your desk; you turn it on, you type away and stuff pops up on the screen. But right up until the late 70s this paradigm was considered absurd. Wasteful, even. It took the vision of Alan Kay and the technical genius of Chuck Thacker and Butler Lampson, along with the almost unlimited cash of Xerox, to realise.</p>
<p>The book opens up by transporting the reader back in time to the late 60s and lays out the genesis of PARC. It then proceeds in roughly chronological order with each chapter focusing on one of the scientists and/or their inventions.  The book closes by looking at some of the reasons why Xerox couldn&#39;t transform its research into viable products. Nominally the story is about what Xerox PARC did, however Hiltzik couches everything in terms of the scientists and it is his ability to bring these characters to life that makes the book so riveting to read. </p>
<p>One of the most striking individuals of the story is the <em>Impresario</em> Robert Taylor, a man who as much as anyone can be considered the grandfather of the Internet (nee ARPANET). The Kays, Thackers and Lampsons of the story are the geniuses, but genius needs direction and, at times, support. This is the role Bob Taylor played. The story of PARC, for better and worse, revolves around him and his relationship with the researchers that shared his vision of interactive computing, and with those, whether in his or the other labs or in management, that, well, didn&#39;t.</p>
<blockquote>
<p>The Computer Science Lab (CSL) was a collection of engineers who weighed everything pitilessly against the question: How will this get us closer to our goal? They had committed themselves to developing Xerox&#39;s Office of the Future and anything that diverted their attention or served an alternative goal had to be discarded or obliterated.</p>
</blockquote>
<p>It is Taylor&#39;s utterly single minded vision of interactive computing that drives much of the success and much of the drama of PARC. Taylor was in continual combat with the other labs for resources and funding and inevitably with his managers George Pake, Jerry Elkind and eventually Bill Spencer. But that was outside his lab. In it, he was the oil that kept the cogs turning and among his staff he was considered a unique and brilliant manager of researchers. It is on this skeleton of contradiction and conflict that the guts of the story of PARC hangs. </p>
<p>The book contains a number of particularly powerful scenes. Two particularly stuck out for me. The first is Alan Kay, his vision for a Personal Computer brusquely put down by CSL Manager Jerry Elkind, falling into a depression. Alan Kay is well known for his brilliance and verbal flourish but Hiltzik does well to also bring home his vulnerability in a way a modern reader would not expect. Kay would ultimately realise his vision of a Personal Computer - with the help of Taylor&#39;s CSL - while Elkind was seconded away from PARC on a Xerox taskforce. We tend to recall Kay&#39;s assertive (and largely proven) views on Computing. It is unexpected and moving then, particularly with the benefit of hindsight, to see him doubt himself and his ideas before they were fully realised.</p>
<p><img src="/images/tumblr_inline_mixj0h4FdK1qz4rgp.jpg" alt=""></p>
<p>The other scene involves Adele Goldberg, co-developer of Smalltalk, and her reactions to Apple&#39;s infamous raid on PARC. If you&#39;ve ever seen The Pirates of Silicon Valley you might have a feel for how this all went down. But Hiltzik&#39;s account of it conveys such a sense of dread and hopeless frustration that the movie never came close to recreating.</p>
<p>By the end of the book Taylor&#39;s time at PARC draws to a close and with his departure so too does the most storied era of PARC. Scarcely six months after Taylor&#39;s forced resignation, the majority of his lab also resign and either follow him to Digital Equipment Corporation (creators of the famous PDP series of minicomputers), or join one of the many startups blooming in Silicon Valley following the success of IBM and Apple&#39;s Personal Computer products.</p>
<p>Dealers is fundamentally a story about people who just happen to be in technology, rather than a book about technology itself. It is a human story. It is about what happens when you take the cream of a generation&#39;s scientific talent, put them in one place and throw lots of money at them. It is about what happens when you combine a visionary maverick with academically minded administrators prone to credentialism. It is about what happens when you have corporate management that want to embrace change but either do not understand it, or worse, fear it. </p>
<p>It is the book&#39;s focus on the people of Xerox, and PARC particularly - their feelings, motivations and backgrounds - that brings this extraordinary tale of modern computing&#39;s birth to life.</p>
]]></description><link>http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</link><guid isPermaLink="true">http://aaronbonner.io/post/44213545007/dealers-of-lightning-book-review</guid><category><![CDATA[smalltalk]]></category><category><![CDATA[xeroc parc]]></category><category><![CDATA[book review]]></category><category><![CDATA[history]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 28 Feb 2013 12:07:00 GMT</pubDate></item><item><title><![CDATA[Magento and GoogleCheckout Woes - Free Products]]></title><description><![CDATA[<p>I fixed a nasty little bug in GoogleCheckout (now Wallet) today. Basically if a customer has a free or zero priced product in their cart, GoogleCheckout will return an error looking something like this:</p>
<blockquote>
<p>Google Checkout: Error parsing XML; message from parser is: cvc-datatype-valid.1.2.1: &#39;&#39; is not a valid value for &#39;decimal&#39;. </p>
</blockquote>
<p>I have developed custom modules which add free or bonus items to a customer&#39;s cart if they use coupons, meet certain cart criteria or belong to particular customer groups. Buy x, get y rules also work this way. So this is a nuisance. Luckily few customers opt to use GoogleCheckout, but still, I don’t Live with Broken Windows[1].</p>
<p>Chasing the problem down the call stack leads to app/code/core/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php and specifically the _getItemsXml() method.</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}
// ...
&lt;unit-price currency=&quot;{$this-&gt;getCurrency()}&quot;&gt;{$unitPrice}&lt;/unit-price&gt;
</code></pre>
<p>Now, if the product&#39;s base price is 0, then for some unfathomable reason it&#39;s set to &#39;&#39;, not 0. As the unit-price element expects a decimal value, an empty string fails validation.</p>
<p>The fix is pretty trivial</p>
<pre><code>$unitPrice = $item-&gt;getBaseCalculationPrice();
if (Mage::helper(&#39;weee&#39;)-&gt;includeInSubtotal()) {
    $unitPrice += $item-&gt;getBaseWeeeTaxAppliedAmount();
}

$unitPrice = ((float) $unitPrice &gt; 0) ? $unitPrice : 0.00;
</code></pre>
<p>The store I needed to fix only used US dollars so I haven&#39;t tested how the use of other currencies or locales might affect this fix.</p>
<p>To apply the fix, don&#39;t modify the core codepool, but instead take advantage of the local and community codepool&#39;s higher classloader priority[2] and place the amended code in app/code/local/Mage/GoogleCheckout/Model/Api/Xml/Checkout.php.</p>
<p>[1]: &#39;Don&#39;t Live With Broken Windows&#39; is a tip I first read about in <a href="http://pragprog.com/the-pragmatic-programmer">The Pragmatic Programmer</a>. It is used to help fight Software Entropy (software&#39;s tendency to lose structure over time). This concept has parallels with the real world as urban areas with broken windows tend to see higher levels of vandalism when compared to areas where windows are constantly maintained. </p>
<p>When you ignore small problems it becomes easier to let more significant problems slide too. Hence the rule of thumb, &#39;Dont Live With Broken Windows&#39;.</p>
<p>[2]: Magento resolves classes in this order local, community then core. This means if two classes have the name Mage_Core_Model_Foo one exists in local the other in core, then the version in local is used.</p>
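<p>The lookup order in [2] amounts to a first-match-wins search over the three codepool roots. Here is an illustrative sketch in Python (not Magento&#39;s actual autoloader; the paths are examples):</p>

```python
import os.path

# First-match-wins lookup across the Magento codepools: local, community, core
CODEPOOLS = ("app/code/local", "app/code/community", "app/code/core")

def resolve_class_file(relative_path, roots=CODEPOOLS, exists=os.path.exists):
    """Return the first codepool copy of a class file, or None if absent."""
    for root in roots:
        candidate = root + "/" + relative_path
        if exists(candidate):
            return candidate
    return None
```

<p>So a copy of Mage/GoogleCheckout/Model/Api/Xml/Checkout.php under app/code/local shadows the identically named file under app/code/core, which is exactly why the fix can live in the local codepool.</p>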
]]></description><link>http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</link><guid isPermaLink="true">http://aaronbonner.io/post/43232091193/magento-and-googlecheckout-woes-free-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 16 Feb 2013 16:17:00 GMT</pubDate></item><item><title><![CDATA[Gracefully shutdown Nginx]]></title><description><![CDATA[<p>When a barman calls &#39;time&#39; at the pub, they are letting you finish your drink. Unfortunately the standard command to pull down Nginx on Ubuntu Precise is a little more aggressive. When it calls time, it snatches your unfinished beer away right there and then.</p>
<p>Thankfully there&#39;s a really simple way to socialise Nginx: call the nginx server command directly with the -s argument instead of using the /etc/init.d/nginx or service nginx commands.</p>
<p><em>-s</em> lets you send signals to the Nginx master process and Nginx behaves differently whether it receives a quit signal versus a term signal.</p>
<pre><code># terminate the nginx master process immediately
$ sudo nginx -s stop 
# terminate the nginx master process once all outstanding connections have been completed
$ sudo nginx -s quit 

abonner@avalanche:~$ ps aux | grep nginx
root      1063  0.0  0.0  88796  3432 ?        Ss   Jan21   0:00 nginx: master process /usr/sbin/nginx
www-data  9786  1.3  0.0  91564  7352 ?        S    Feb03 190:11 nginx: worker process is shutting down
www-data  9788  1.3  0.0  91288  7072 ?        S    Feb03 189:02 nginx: worker process is shutting down
www-data  9789  1.3  0.0  91160  6956 ?        S    Feb03 190:03 nginx: worker process is shutting down
</code></pre>
<p>Let your visitors finish their drinks: don&#39;t terminate nginx on a production server using /etc/init.d/nginx stop or service nginx stop.</p>
<p>Read more about <a href="http://wiki.nginx.org/CommandLine">Nginx&#39;s command line options</a></p>
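<p>For what it&#39;s worth, -s quit is just a convenience for signalling the master process yourself; the manual equivalent (assuming the default Ubuntu pid file location) is:</p>

```shell
# Send SIGQUIT to the nginx master for a graceful shutdown,
# or SIGTERM for the fast, beer-snatching variety
sudo kill -QUIT "$(cat /var/run/nginx.pid)"
```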
]]></description><link>http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</link><guid isPermaLink="true">http://aaronbonner.io/post/42992801181/gracefully-shutdown-nginx</guid><category><![CDATA[nginx]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 13 Feb 2013 09:37:00 GMT</pubDate></item><item><title><![CDATA[(Re)Reading the Classics]]></title><description><![CDATA[<p>Here we are already in the second week of February. I do wonder how many New Year Resolutions have survived the return to work?</p>
<p>This year I didn&#39;t bother with any. Back in November, though, I resolved to read more technical books, and in particular to focus on the &#39;classics&#39;. My motivation was stirred by Panagiotis Louridas&#39;s essay &#39;Rereading the Classics&#39; from the book <a href="http://www.amazon.co.uk/Beautiful-Architecture-Leading-Thinkers-Software/dp/059651798X">Beautiful Architecture</a>. In it, Louridas examines the structure of Smalltalk and tries to explain why it achieved greater lasting success blazing a trail for others to follow than as a practical working programming environment.</p>
<p><img src="/images/tumblr_inline_midf7eUQkN1qz4rgp.jpg" alt="Alto2 Running Smalltalk-80"></p>
<p>Louridas suggests that Smalltalk is a classic and, by way of justifying why, he quotes Italo Calvino&#39;s <a href="http://www.amazon.co.uk/Why-Read-Classics-Penguin-Modern/dp/0141189703">Why Read the Classics (1986)</a>:</p>
<blockquote>
<p>The classics [...] exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.</p>
<p>A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives much pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.</p>
<p>The classics are books which, upon reading, we find even fresher, more unexpected, and more marvelous than we had thought from hearing about them.</p>
<p>A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.</p>
</blockquote>
<p>Smalltalk is a language that most of us have heard about but have rarely seen, and after being introduced to it I was inspired to start digging up any further literature I could find. The more that I read, the more that I appreciated its cleverness. Smalltalk can be eerily familiar, and as you begin to grok its syntax it is easy to recognise aspects that have inspired certain features of our &#39;modern&#39; OO languages. </p>
<p>It is striking that the language itself (and indeed much of the material written about it) is so old. The most recent version of Smalltalk is Smalltalk-80 and although there are modern implementations, the overwhelming bulk of the syntax and environment still adheres to the 1980 standard. Yet download <a href="http://www.pharo-project.org/home">Pharo</a>, read Alan Kay&#39;s <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk (1993)</a>, or Dan Ingalls&#39;s <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles behind Smalltalk (1981)</a> and it all still seems contemporary. The language itself, which was controversial at the time for eschewing ALGOL syntax, has aged very well. Ruby may have made block closures fashionable, but Smalltalk sported them over a quarter of a century ago. Reflection, metaprogramming and dynamic typing: all in Smalltalk in the 70s. Even the idea of using virtual machines to host the programming and operational environment seems remarkably contemporary today, as we increasingly move to abstract our programs and programming environments from bare metal.</p>
<p>It is humbling to see so many ideas that we take for granted today already implemented on a platform that is over a quarter of a century old. It is like marvelling at how the Egyptians built the pyramids. Sure, we could do it now, no sweat. But we have so much more raw engineering knowledge to throw at the problem. Alan Kay, Dan Ingalls, Adele Goldberg and the rest of the team at the Xerox PARC Learning Research Group designed and implemented Smalltalk with less computing power than my fridge has today. Despite Smalltalk&#39;s development formalising most of the vocabulary of OO software, it&#39;s hard not to feel that some of the energy and innovation of Kay&#39;s thinking failed to survive the C++ and Java succession.</p>
<p><img src="/images/tumblr_inline_midf8iVUqC1qz4rgp.png" alt="Smalltalk 80 Environment"></p>
<p>Our discipline is still quite young when compared to traditional engineering or the sciences. Yet we seem to keep facing the same problems over and over again. The only difference is perhaps a few more layers of abstraction, bigger piles of data and slightly more exotic technologies. When I consider that the fundamental concept of &#39;Agile&#39;, or at least &#39;iterative&#39;, development was already doing the rounds in the 60s, it makes me wonder what other insights are out there, buried in the forgotten past.</p>
<p>Maybe one reason we tend to forget what others have learned is that <a href="http://www.codinghorror.com/blog/2008/04/programmers-dont-read-books----but-you-should.html">the average developer only reads one technical book per year</a>. That means a sizeable percentage of software professionals do not actually read any! I think if I average over my professional working career (after graduating at the end of 2004) I would probably be batting one per year too. If I were to look over the last three years, maybe two per year. I wonder how many mistakes I might have avoided in that period had I read Fred Brooks&#39;s Mythical Man Month in 2004 rather than 2012.</p>
<p>My excuse has always been a lack of quality time. Last year in July I left my fulltime job for the world of consulting/freelancing. I imagined it would be easier to read more. In practice it has been, but not as easy as I thought or hoped.</p>
<p>I have a young daughter, my partner has returned to work and despite working from home, I still put in 50-60 hour weeks. It doesn&#39;t leave a lot of spare time and what spare time there is, is usually late at night when it&#39;s hard to concentrate.</p>
<p>So since I made a determined effort to read more, I&#39;ve read four books cover to cover, and cherry picked bits out of another four. It feels good, and the trick I&#39;ve found is to read a chapter at a time, whenever you can. Whether it&#39;s just before dinner, over coffee/lunch, waiting at the supermarket or just before bed. I found that by reading every day, even if it was just a little bit, I was starting to get through entire books.</p>
<p>It requires conscious effort though, and some material is more suited to this style of reading than others. After a long day, just before bed, it&#39;s pointless trying to delve into <a href="http://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871">SICP</a>. It helps to read certain books during the work day; for example I&#39;ve been re-reading <a href="http://www.amazon.co.uk/Growing-Object-Oriented-Software-Guided-Signature/dp/0321503627">GOOS</a> and going through <a href="http://www.amazon.co.uk/Driven-Development-Addison-Wesley-Signature-Series/dp/0321146530/">Kent Beck&#39;s TDD by Example</a> over lunch times. I actively practice TDD, so reading a chapter from one of these books midday helps me relate it directly to what I&#39;m working on when I go back into the office.</p>
<p>I have a huge reading list set up in Google Reader, and I&#39;m starting to think it&#39;s distracting. I have a slight OCD in that I need to keep the unread count at zero. At the end of a thirty minute work sprint, I would take five minutes to quickly flick through the list; most of the content is superficial. The benefit of reading a book over a blog is the tendency for a book to have its ideas more fully formed and logically structured. I realise the irony of saying this while writing a blog myself. Blogs have their place, but I have been spending far more time reading them than reading books. I am now leaving my Google Reader list unread for longer and, after a few 30 minute sprints, picking up a book instead.</p>
<p>As I wrote above, Louridas inspired me to learn more about Smalltalk, and any research into the language leads to Alan Kay&#39;s ACM paper <a href="http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_Abstract.html">The Early History of Smalltalk</a>. Beyond providing a wonderful insight into the language and Xerox PARC, the paper is the source of some great quotes. One of my favourites goes:</p>
<blockquote>
<p>Where Newton said he saw further by standing on the shoulders of giants, computer scientists all too often stand on each other’s toes.</p>
</blockquote>
<p>In Software Engineering, we are so busy looking forward that we don&#39;t look back often enough. There is such a rich wealth of knowledge out there already considered and documented. I think we all should make more of an effort to re-discover it.</p>
]]></description><link>http://aaronbonner.io/post/42879907851/rereading-the-classics</link><guid isPermaLink="true">http://aaronbonner.io/post/42879907851/rereading-the-classics</guid><category><![CDATA[reading]]></category><category><![CDATA[smalltalk]]></category><category><![CDATA[beautiful-architecture]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 12 Feb 2013 00:07:00 GMT</pubDate></item><item><title><![CDATA[Some Hidden Unix / Shell Gems]]></title><description><![CDATA[<p>It is 2013: we (still) don&#39;t have flying cars or hoverboards, and, as developers, we still use terminals to interact with our operating system. So every so often I like to browse through commandlinefu.com and try to pick up little tidbits that improve my command line efficiency.</p>
<p>Here&#39;s a small selection I have picked up recently that I didn&#39;t know.</p>
<pre><code>sudo !! 
</code></pre>
<p>Run the previous command as sudo. This is great when you realise you needed to run something as root.</p>
<pre><code>ctrl-x e
</code></pre>
<p>Open up $EDITOR to compose a long command; in my setup it fires up vim. This is great for some of the long rails commands you need to create controllers or models.</p>
<pre><code>cat /etc/passwd | column -s&#39;:&#39; -t 
</code></pre>
<p>The column utility columnates input: the -t argument formats standard input into a table, and -s lets you specify an arbitrary field delimiter. For unformatted input this is very handy.</p>
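<p>For instance, with a couple of sample passwd-style records (inlined here so the example is self-contained):</p>

```shell
# Feed two colon-delimited records to column and let -t align them
printf 'root:x:0:0\nwww-data:x:33:33\n' | column -t -s':'
```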
<p>These next few are specific to zsh, and while I do love bash, since switching to zsh I haven&#39;t really looked back. It&#39;s things like this that when you work with a terminal every single day, you can&#39;t give up.</p>
<pre><code>aaron@tempest ~ $ d                                 
0	~
aaron@tempest ~ $ cd /etc
aaron@tempest /etc $ d
0	/etc
1	~
aaron@tempest /etc $ 1
~
aaron@tempest ~ $  
</code></pre>
<p>The &#39;d&#39; command lists the directory stack, and entering an index number will then switch you directly to that directory in the stack. It is a killer app.</p>
<p>Moving between directories is also very pleasant in zsh. Use &#39;..&#39; to move up a directory, and simply type a directory&#39;s name to move into it.</p>
<pre><code>aaron@tempest ~ $ ..       
aaron@tempest /Users $ aaron
aaron@tempest ~ $ 
</code></pre>
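<p>If your shell doesn&#39;t do this out of the box, the behaviour comes from a handful of zsh options and aliases; a minimal sketch along the lines of what frameworks like oh-my-zsh ship:</p>

```shell
# ~/.zshrc - a sketch of the options behind the 'd' and '..' tricks
setopt auto_cd            # typing a directory name cd's into it
setopt auto_pushd         # every cd pushes the old directory onto the stack
setopt pushd_ignore_dups  # keep the stack free of duplicates
setopt pushd_minus        # make 'cd -1' count from the top of the stack
alias d='dirs -v'         # list the stack with indices
# let a bare '1' .. '9' jump to that entry in the stack
for i in 1 2 3 4 5 6 7 8 9; do alias "$i"="cd -$i"; done
```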
<p>This last one is a trick I&#39;ve known for a few years. I don&#39;t know exactly how much time it has saved me, but I use it every single day.</p>
<p>In vim, if you&#39;re editing a file that requires root (or any other user) permissions, you can write the file by doing</p>
<pre><code>:w !sudo tee %
</code></pre>
<p>This pipes the buffer&#39;s contents to tee running under sudo, and tee then writes them to the current file (which % expands to). I use it so much that I&#39;ve set up a leader key binding in <a href="https://raw.github.com/ajbonner/unix/master/vim/vimrc">my .vimrc</a></p>
<pre><code>nnoremap &lt;leader&gt;sr :w !sudo tee %&lt;CR&gt;
</code></pre>
<p>There&#39;s nothing more annoying than making lengthy changes to a config file, going to write it, and getting permission denied...</p>
<p>I make all <a href="https://github.com/ajbonner/unix">my configs</a> available online at github, if you&#39;re interested in seeing how I set up my environment.</p>
]]></description><link>http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</link><guid isPermaLink="true">http://aaronbonner.io/post/41095683797/some-hidden-unix-shell-gems</guid><category><![CDATA[unix]]></category><category><![CDATA[cli]]></category><category><![CDATA[zsh]]></category><category><![CDATA[bash]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jan 2013 10:56:00 GMT</pubDate></item><item><title><![CDATA[Building the Mysql2 Gem with Macports]]></title><description><![CDATA[<p>Macports puts its libraries in non-standard locations, so to build the mysql2 gem on an OSX computer, you will need to do a little bit of extra work to ensure that gem calls make with appropriate options.</p>
<p>To cut a short story very short, you do this (at least if you have Macports in /opt/local, the default, and are using the mysql55 port):</p>
<pre><code>$ gem install mysql2 -- --with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql 
</code></pre>
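<p>If you install the gem via Bundler rather than with gem directly, the same flags can be persisted with bundle config (paths as above, assuming the Macports mysql55 port):</p>

```shell
# Tell Bundler to always build mysql2 with the Macports library paths
bundle config build.mysql2 \
  "--with-mysql-lib=/opt/local/lib/mysql55/mysql --with-mysql-include=/opt/local/include/mysql55/mysql"
```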
]]></description><link>http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</link><guid isPermaLink="true">http://aaronbonner.io/post/38371408148/building-the-mysql2-gem-with-macports</guid><category><![CDATA[macports]]></category><category><![CDATA[mysql]]></category><category><![CDATA[osx]]></category><category><![CDATA[ruby]]></category><category><![CDATA[rubygems]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 Dec 2012 07:51:18 GMT</pubDate></item><item><title><![CDATA[Remote CLI XDebugging with PHPUnit]]></title><description><![CDATA[<p>There&#39;s a peculiar issue right now with PHPUnit where it will not respect php.ini arguments supplied to it on the commandline (i.e. supplying -d arguments).</p>
<p>This matters a lot when you want to use xdebug on a project that runs off a virtual machine, or even perhaps a remote server.</p>
<p>The typical pattern (when using PHPStorm in my case) to invoke a remote cli debugging session is to set an environment variable telling the IDE what server configuration to use, and to tell PHP what remote host to connect to.</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; php -dxdebug.remote_host=192.168.0.1 myphpscript.php
</code></pre>
<p>Now this will work fine, however if we want to debug during a phpunit test normally you would do this</p>
<pre><code>$ PHP_IDE_CONFIG=&#39;serverName=mydevmachine.local&#39; phpunit -dxdebug.remote_host=192.168.0.1 -c phpunit.xml
</code></pre>
<p>Unfortunately this doesn&#39;t appear to work at the moment (version 3.7.9). If I use the xdebug test client, I can see xdebug trying to connect to localhost, ignoring what I&#39;ve told PHPUnit. I&#39;ll look into this a bit more later, but I suspect PHPUnit isn&#39;t passing on the php.ini settings in time for xdebug to hook into them.</p>
<p>The solution to this problem is to make use of ssh port forwarding. This works exactly the same for a virtual machine as it would for a remote host, which makes xdebugging on a production machine (hopefully only ever in an emergency!!!) much more straightforward (and less insecure).</p>
<pre><code>$ ssh -R 9000:localhost:9000 myvm.local
</code></pre>
<p>This tells myvm.local to forward any connection made to its own localhost:9000 back to the ssh client&#39;s port 9000. So when xdebug on the VM connects to localhost:9000, it actually ends up talking to mydevmachine.local:9000.</p>
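<p>If you debug this way often, the tunnel can be made permanent in your ssh client configuration rather than retyped each time (the host name here just matches the example VM):</p>

```shell
# Append a persistent reverse tunnel for the VM to ~/.ssh/config
cat >> ~/.ssh/config <<'EOF'
Host myvm.local
    RemoteForward 9000 localhost:9000
EOF
```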
<p>It&#39;s a bit of a hack, but a time saving one. The other alternative is Vim and its xdebug plugin. This isn&#39;t a bad alternative. But once you&#39;ve experienced the power of PHPStorm&#39;s debugging implementation it&#39;s hard to go back.</p>
]]></description><link>http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/35630984902/remote-cli-xdebugging-with-phpunit</guid><category><![CDATA[xdebug]]></category><category><![CDATA[phpunit]]></category><category><![CDATA[ssh]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Nov 2012 11:33:00 GMT</pubDate></item><item><title><![CDATA[A Great Time to be a PHP Developer]]></title><description><![CDATA[<h2>Embrace change</h2>
<p>It has been an exciting time to be a <a href="http://www.php.net">PHP</a> Developer these past twelve months: PHP 5.3 is now rock solid and PHP 5.4 is getting there. Both releases significantly modernise elements of the language, closing the gap between PHP and the offerings of more &#39;in vogue&#39; languages.</p>
<p>In technology we often see change happen in sudden, explosive steps. Often it seems to coincide with developments in a technology&#39;s ecosystem or among its competitors. For PHP the first major kick was the rapid rise in popularity of <a href="http://www.smalltalk.org">Object Oriented Programming</a> in the early 00s. This led to PHP 5&#39;s radically overhauled OO implementation in 2004. The next kick, I feel, came in 2005 when <a href="http://rubyonrails.org">Ruby on Rails</a> exploded into everyone&#39;s consciousness. RoR provided a full stack web development platform that drastically simplified creating complex web applications. The PHP community responded in kind with a number of &#39;fullfat&#39; <a href="http://c2.com/cgi/wiki?ModelViewController">Model View Controller (MVC)</a> Frameworks, the most successful being <a href="http://framework.zend.com">Zend&#39;s</a> and <a href="http://symfony.org">Symfony</a>.</p>
<h2>It all depends</h2>
<p>The arrival of PHP 5.3 and features like <a href="http://www.php.net/manual/en/language.namespaces.php">Namespaces</a>, <a href="http://www.php.net/manual/en/book.phar.php">PHAR</a>, <a href="http://www.php.net/manual/en/functions.anonymous.php">Closures</a> and the ubiquity of <a href="http://github.org">Github</a> is having the effect of giving PHP a new kick, and the results are starting to make themselves felt. We now have second generation frameworks from Zend and Symfony leveraging these technologies. </p>
<p>One problem remains though, and that is managing and distributing dependencies. Modern web development platforms all now have robust dependency management tools available and in the PHP camp, <a href="http://pear.php.org">PEAR</a> wasn&#39;t really cutting it. </p>
<p>The success of <a href="http://symfony.com/blog/symfony-2-0">Symfony2</a> in particular, with its emphasis on high quality, modular components, forced PHP developers to address how they bundled and distributed library code.</p>
<p>Luckily for us, the guys behind <a href="http://getcomposer.org">Composer</a> <a href="http://gembundler.com">(again, taking considerable cues from the Ruby community)</a> have licked it. Composer, in tandem with Symfony2 components, allows PHP Developers to confidently build on top of other developers&#39; libraries.</p>
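<p>By way of illustration, pulling a library into a project with Composer is a one-file affair; a minimal sketch (the package name is just an example):</p>

```shell
# Declare a dependency in composer.json, then let Composer fetch it
cat > composer.json <<'JSON'
{
    "require": {
        "monolog/monolog": "1.2.*"
    }
}
JSON
php composer.phar install   # resolves, downloads and generates an autoloader
```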
<h2>Do we really need another packaging tool?</h2>
<p>Why did we need another package and dependency management tool anyway? What, really, is wrong with PEAR? Well, if we wind the clock way back to 1999, when Netscape Communicator was still the most popular web browser and Google had just moved out of Susan Wojcicki&#39;s garage, PEAR was conceived as PHP&#39;s answer to Perl&#39;s CPAN. Despite some strident efforts, it never really managed to become a pleasant package manager to work with: rigid, elitist and, worst of all, difficult for end-users. Strictly speaking, PEAR&#39;s age is not the problem, but its centralised nature is a bottleneck and there is no straightforward way to handle two packages with conflicting dependencies. For example: say package x requires stable package y, while package z requires beta package y. You can&#39;t install both. Dependency and package management has moved on a long way since 1999.</p>
<p><img src="/images/tumblr_mddw3mshKk1qac7a1.jpg" alt="PHP packaging has been broken for a long time"></p>
<p>Over time PEAR&#39;s shortcomings have led to a graveyard of abandoned packages, code of variable quality at best and dubious quality at worst, and a community lacking any sort of dynamism. If you make something easy, people will use it; PEAR is difficult for developers and users alike.</p>
<p>Composer democratises (in the best sense) things and puts full control of dependencies in the hands of library developers, who are free to pick and choose the code they want to use, and free from having to navigate the PEAR jungle. Here the rise and rise of Github has been key. Composer can sit on top of the code distribution services provided by Github, or it can use its default Packagist repository. This removes the need for libraries to live in a blessed canonical repository, or for developers to host them themselves.</p>
<h2>... profit?</h2>
<p>There&#39;s no compelling need now to constantly rewrite basic library components (I think we&#39;ve finally licked what ought to be the simple issue of class loading!). Free of the shackles of PEAR, we are witnessing an explosion of high quality PHP frameworks, libraries and utilities.</p>
<p><a href="http://phpspec.net">PHPSpec</a>, <a href="http://behat.org">Behat</a>, <a href="http://twig.sensiolabs.org/">Twig</a>, <a href="https://github.com/padraic/mockery">Mockery</a> and <a href="http://www.doctrine-project.org">Doctrine</a> are just a few that immediately spring to mind. Some (such as Doctrine) have been around a while, but the advances PHP 5.3 brought to the table have significantly improved the utility of these projects.</p>
<p>Anyway, so (after a fashion) I come to the tool that motivated me to write this post, <a href="https://github.com/netz98/n98-magerun">n98-magerun</a>.</p>
<p>The name is horrible, but the tool itself is brilliant. In short, it&#39;s Drush for <a href="http://www.magentocommerce.com">Magento</a> and it&#39;s wonderful. It is one of those tools that makes you wonder what on earth you did before it. </p>
<p>I have a folder full of bash scripts, cobbled together to help automate the mind-numbing process of managing Magento installations. Over the course of a few months <a href="https://twitter.com/cmuench">Christian Münch</a> and friends have overseen a small tool quickly develop into the kind of utility we&#39;ve all wanted but never had the time/patience to build ourselves.</p>
<p>Magerun is elegantly simple for the user, and cleanly extendable by developers. It is a perfect illustration of why it&#39;s such a great time to be a PHP developer. Better dependency management, easy distribution, modular libraries and powerful language syntax have all come together to let someone with an itch scratch it quickly and effectively.</p>
<p>It has become several orders of magnitude easier to develop, package and distribute PHP libraries and utilities. The result of this leap forward is a brilliant tool that helps Magento developers dramatically increase their productivity.</p>
]]></description><link>http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</link><guid isPermaLink="true">http://aaronbonner.io/post/35570860390/a-great-time-to-be-a-php-developer</guid><category><![CDATA[composer]]></category><category><![CDATA[frameworks]]></category><category><![CDATA[magerun]]></category><category><![CDATA[n98-magerun]]></category><category><![CDATA[pear]]></category><category><![CDATA[php]]></category><category><![CDATA[symfony]]></category><category><![CDATA[Zend Framework]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Nov 2012 17:28:00 GMT</pubDate></item><item><title><![CDATA[MageRun and Xmllint: Pretty Printing Magento XML Config]]></title><description><![CDATA[<p><strong>Just a quick note, as you may notice from the comments, Magerun now pretty prints the xml output by default. It appears DomDocument requires preserveWhitespace = false in order to correctly reformat output. Thanks to Christian for sorting it all out!</strong></p>
<p>I&#39;ll be writing about how awesome <a href="https://github.com/netz98/n98-magerun">Magerun</a> is shortly, but just one of its cool features is the ability to dump out a merged version of Magento&#39;s config.</p>
<p>This is extremely helpful when trying to resolve conflicts between modules, or figure out what bit of configuration is taking precedence.</p>
<p>The resulting xml though is pretty raw and unformatted, but <a href="http://xmlsoft.org/xmllint.html">xmllint</a> can fix that.</p>
<p>Xmllint expects a file to work with, so to avoid having to create temporary files we can use bash&#39;s <a href="http://www.gnu.org/software/bash/manual/bash.html#Process-Substitution">Process Substitution</a> feature.</p>
<pre><code>$ xmllint --format &lt;(magerun config:dump)
</code></pre>
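<p>As an aside, xmllint will also read standard input if you pass &#39;-&#39; as the file name, so a plain pipe works too:</p>

```shell
magerun config:dump | xmllint --format -
```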
<p>So, magerun and xmllint, a simple way to get a formatted, easy to examine view of how Magento is putting your install&#39;s configuration together.</p>
]]></description><link>http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</link><guid isPermaLink="true">http://aaronbonner.io/post/35357732777/magerun-and-xmllint-pretty-printing-magento-xml</guid><category><![CDATA[magento]]></category><category><![CDATA[magerun]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Nov 2012 21:26:00 GMT</pubDate></item><item><title><![CDATA[Git --name-only considered helpful]]></title><description><![CDATA[<p>A number of git commands take the --name-only argument which can help give you an overview of what is going on between two branches, or in a specific commit.</p>
<pre><code>$ git show --name-only &lt;commit&gt;
</code></pre>
<p>This will give you the list of files affected by commit &lt;commit&gt;.</p>
<p>Alternatively, if you don&#39;t care about the specific content differences between two branches and only want to see which files differ, you can do</p>
<pre><code>$ git diff master..origin/master --name-only
</code></pre>
<p>This will show you the list of files that are different between your local master branch and the remote master branch. Handy if you have just done a git fetch and want to see what&#39;s different before merging or rebasing.</p>
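<p>Both forms are easy to try in a throwaway repository; a self-contained sketch:</p>

```shell
# Build a scratch repo with one commit touching two files,
# then list just the affected file names
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email you@example.com
git config user.name 'A. Nobody'
echo one > a.txt
echo two > b.txt
git add .
git commit -qm 'add two files'
git show --name-only --pretty=format: HEAD   # prints a.txt and b.txt
```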
]]></description><link>http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</link><guid isPermaLink="true">http://aaronbonner.io/post/35200205104/git-name-only-considered-helpful</guid><category><![CDATA[git scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Nov 2012 13:27:21 GMT</pubDate></item><item><title><![CDATA[Reset MySQL's Root Password]]></title><description><![CDATA[<p>If for some reason you have forgotten the root password for an existing mysql installation, you can recover the account by starting mysqld with the --skip-grant-tables option. This is roughly analogous to starting a Unix system in single user mode.</p>
<p>First thing, shut down the running instance and then restart it directly</p>
<pre><code>$ sudo -u &lt;mysql_user&gt; mysqld_safe --skip-grant-tables --skip-networking
</code></pre>
<p>The --skip-networking option is important: because the grant tables are skipped, any user who can connect to the running mysqld service gets full permissions.</p>
<p>Once you&#39;ve started the server up, login without a password, and issue an update query to the mysql.user table.</p>
<pre><code>$ mysql -uroot mysql
mysql&gt; UPDATE user SET password=password(&#39;newpassword&#39;) WHERE User = &#39;root&#39;;
</code></pre>
<p>Close down mysqld and restart. You&#39;re good to go.</p>
]]></description><link>http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</link><guid isPermaLink="true">http://aaronbonner.io/post/34510953992/reset-mysqls-root-password</guid><category><![CDATA[mysql]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 28 Oct 2012 20:36:00 GMT</pubDate></item><item><title><![CDATA[Rails Should be more Worried about Becoming the OLD PHP]]></title><description><![CDATA[<p>TL;DR Basically it is all Google&#39;s fault.</p>
<p>We&#39;ve seen some pretty epic PHP rants this year, probably the most famous among them are <a href="http://me.veekun.com/blog/2012/04/09/php-a-fractal-of-bad-design/">PHP a Fractal of Bad Design</a>, and Jeff Atwood&#39;s latest (in what seems to be a biennial broadside) <a href="http://www.codinghorror.com/blog/2012/06/the-php-singularity.html">The PHP Singularity</a>.</p>
<p>The common thread in these rants is incredulity that anyone would, in 2012, write new code in PHP. There are a lot of reasons why someone might write greenfield PHP code in 2012. But equally (and this is said as a decade-long PHP programmer) I have to admit that plenty of the criticisms levelled at PHP are valid. Yet, for the most part, they just don&#39;t, <a href="http://ilikekillnerds.com/2012/05/php-sucks-so-what/">particularly</a>, <a href="http://fabien.potencier.org/article/64/php-is-much-better-than-you-think">matter</a>.</p>
<p>One criticism that is wholly invalid, yet comes up time and time again, is that using PHP intrinsically leads to bad code. I don&#39;t feel this is an inherent trait of PHP itself so much as symptomatic of PHP&#39;s popularity and low barrier to entry. There are more examples of bad code out there than on pretty much any other platform because there is simply more code out there, written by programmers of wildly varying skill. The other problem is that PHP came into existence as a scratch to a C programmer&#39;s itch. PHP was developed at a time when people still actually wrote web applications in C and when the stateless nature of HTTP was relatively respected. PHP, like the web itself, has moved on dramatically since then.</p>
<p>A modern PHP 5.4 webapp looks about as similar to an early 00s PHP 4 webapp as Scala does to Java. Yet many critics slating the language appear to be code archaeologists, excavating prehistoric practices that went out of favour long ago.</p>
<p>This can be partially forgiven, because owing to the age of the language there&#39;s plenty of out of date information out there with high rankings in Google. The ubiquitous <a href="http://www.w3schools.com">w3schools</a> is an unfortunate example of <a href="http://www.w3schools.com/php/php_filter.asp">bad practices</a> coming well ahead of sites with more modern approaches to solving problems in PHP.</p>
<p>So &#39;New PHP&#39; is very different to &#39;Old PHP&#39;. But the &#39;Old PHP&#39; is what most people seem to find when searching in Google and this confuses people.</p>
<p>We see this manifested in blogs like <a href="http://roytomeij.com/2012/will-rails-be-the-new-php.html">Will Rails become the new PHP</a>. This blog has a number of spectacular shortcomings, most egregiously the author&#39;s horribly naive view of the PHP community, but the interesting one is his ignorance of the power of Google.</p>
<p>There&#39;s plenty of support out there for budding PHP programmers, whether on the web, on forums, or IRC. There are countless, well attended, supported and growing conferences, meetups and the like for PHP programmers. One thing that a community cannot do, is force Google to nuke w3schools&#39; PageRank. Which means that brilliant efforts like <a href="http://phptherightway.com">PHP The Right Way</a> get swamped by old, incorrect and at times dangerous dreck.</p>
<p>And what the &#39;Will Rails become the new PHP&#39; author perhaps hasn&#39;t realised, is that Rails, at least in the terms he&#39;s trying to couch it in, has already become the &#39;New PHP&#39;. I am a novice Rails developer; I like to hack around in it as it can be quite fun to spike out solutions. What I&#39;ve been struck by is the sheer amount of bad advice out there. Advice novices will come across if they turn to Google for help.</p>
<p>If you&#39;re well versed in a platform you learn through brutal experience what works, what doesn&#39;t and your nose is finely tuned to bullshit. When I read a PHP article I know instinctively if what I am reading is reliable. But with Rails, as a novice, I don&#39;t quite have that sense, beyond my own background experience with other programming languages.</p>
<p>So let&#39;s look at an example. I&#39;ve been working on a dead simple Rails authentication webservice. It listens for HTTP requests for /login, /logout, /session, etc., and emits either XML or JSON in response. I&#39;m using the <strong>respond_to</strong> method to serve out these responses. Unfortunately, what I found is that if I request a route that does not exist, I get an HTML error back. This doesn&#39;t make a lot of sense for a webservice that otherwise speaks XML and JSON.</p>
<p>Other global exceptions similarly respond with HTML. I don&#39;t want to wrap every action up in a begin/rescue block and there is certainly no way to intercept router exceptions in actions anyway. So I needed to learn how to catch global exceptions.</p>
<p>In my journey of (Google) discovery I came across <a href="http://torqueo.net/proper-catching-controller-level-exceptions-in-rails/">this blog post</a> and appeared to hit paydirt. The advice appears to be legit, hell someone in the rails community even featured it in a podcast. So on the face of it this seems good. But that switch statement sure is smelly. Does it, really, need to be this hard? </p>
<script src="https://gist.github.com/3891892.js?file=application_controller.rb"></script>

<p>Now my general purpose programming brain recognised that this code, while solving my problem, is not ideal. And why is it not ideal? This one method sure has a lot of responsibility. Method names with plurals in them are usually a code smell. Over time, as more specific exceptions need to be handled, I would end up with code that is as easy to read as Goethe&#39;s Faust, photocopied and in the original Gothic script (not easy). What we have here is a <a href="http://c2.com/cgi/wiki?GodMethod">God Method</a> in training.</p>
<p>Now, what if I am a Ruby/Rails/Programming novice? I have got plenty of other stuff to learn; I&#39;m going to go right ahead and <a href="http://c2.com/cgi/wiki?CargoCult">Cargo Cult</a> this code into my webapp and move on. Just like all the rookie PHP coders do, right?</p>
<p>Well, I didn&#39;t do that. I saw that this <strong>rescue_from</strong> method was pretty awesome and so I went to the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html">Rails API docs</a> to look it up. </p>
<p>API docs for any language are pretty terse, but what jumped out at me was this line: </p>
<pre><code>“Handlers are inherited. They are searched from right to left, from bottom to top, and up the hierarchy. The handler of the first class for which exception.is_a?(klass) holds true is the one invoked, if any.”
</code></pre>
<p>This isn&#39;t great documentation admittedly, but it basically means that if you put <strong>rescue_from Exception</strong> at the bottom of the list of <strong>rescue_from</strong> handlers in your application_controller.rb file, then, since everything derives from <strong>Exception</strong> and Rails looks at the handlers from the bottom up, nothing else will ever get a look in. The author of that helpful blog we found didn&#39;t realise this, and so his solution was needlessly complicated.</p>
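<p>The documented bottom-to-top search is easy to model in plain Ruby (no Rails here; every name is invented for illustration):</p>

```ruby
# Handlers as Rails stores them: in registration order, top to bottom.
# Per the docs, lookup runs bottom-to-top and fires the first match.
handlers = [
  [ArgumentError, ->(e) { "specific: #{e.class}" }],
  [Exception,     ->(e) { "catch-all" }]  # registered last, so searched first
]

# Mimic the documented search: reverse the list, take the first
# pair whose class satisfies exception.is_a?(klass).
def dispatch(handlers, error)
  match = handlers.reverse.find { |klass, _| error.is_a?(klass) }
  match.last.call(error) if match
end

dispatch(handlers, ArgumentError.new)  # => "catch-all", never "specific: ..."
```

<p>So the fix is simply ordering: register the <strong>rescue_from Exception</strong> fallback first, at the top, and the more specific handlers after it.</p>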
<p>What can we learn from this? Well Rails programmers certainly live in a glass house and shouldn&#39;t throw stones is one thing. But on a slightly less trollish note, there is a problem here for all novice programmers that turn to Google to help them solve problems. The answers on Google are usually either wrong, or at best, incomplete. As the web gets older, bad and out of date advice piles up making it much harder for novices to find good advice. </p>
<p>Knocking a language for this phenomenon (or a framework, seriously, whatever) is more than a little ignorant and doesn&#39;t solve the problem. Efforts like <a href="http://phptherightway.com">PHP The Right Way</a> are how the PHP community is trying to fix it. If Rails really doesn&#39;t want to be the &#39;Old PHP&#39;, its community needs to realise it&#39;s less to do with languages and platforms, and more about SEO.</p>
]]></description><link>http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</link><guid isPermaLink="true">http://aaronbonner.io/post/33633901163/rails-should-be-more-worried-about-becoming-the</guid><category><![CDATA[languages]]></category><category><![CDATA[php]]></category><category><![CDATA[rails]]></category><category><![CDATA[ruby]]></category><category><![CDATA[google]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 15 Oct 2012 10:24:00 GMT</pubDate></item><item><title><![CDATA[Update Git Remote Branches List]]></title><description><![CDATA[<p>Over time, a remote will have branches added and deleted. Your local working snapshot can often get littered with stale, now removed branches.</p>
<p>To see what branches your local repo thinks exist, you do something like this:</p>
<pre><code>$ git branch -rv
&gt; origin/1620-upgrade  2e0cc56 Ignore active local.xml from vc
&gt; origin/HEAD          -&gt; origin/master
&gt; origin/cas-sso       2351be5 Add gateway login and logout support
&gt; origin/giveaways     63daf5a Use cms blocks for banner placements
&gt; origin/master        496c975 Merge affiliate module
&gt; origin/newskin       d7220c9 Optimise skin and ui images
&gt; origin/release       496c975 Merge affiliate module
</code></pre>
<p>So this is my local Magento git repository; many of the branches here are now defunct and no longer on the remote (i.e. I had previously used $ git push origin :branch from another host).</p>
<p>To refresh, I need to prune my branches list. The git incantation to do this is:</p>
<pre><code>$ git remote prune origin
&gt; Pruning origin
&gt; URL: dev@vcs:git/store.git
* [pruned] origin/1620-upgrade
* [pruned] origin/giveaways
* [pruned] origin/newskin
</code></pre>
<p>Looking at the remote branch list again:</p>
<pre><code>$ git branch -rv
&gt; origin/HEAD    -&gt; origin/master
&gt; origin/cas-sso 2351be5 Add gateway login and logout support
&gt; origin/master  496c975 Merge affiliate module
&gt; origin/release 496c975 Merge affiliate module
</code></pre>
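<p>To see the whole dance in miniature, here&#39;s a self-contained toy session (all repo and branch names invented) you can paste into a scratch directory:</p>

```shell
# Build a throwaway remote with two branches, delete one "from another
# host", then watch `git remote prune` clean up the stale tracking ref.
cd "$(mktemp -d)"
git init -q --bare remote.git
git clone -q remote.git local 2>/dev/null
cd local
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD 2>/dev/null
git checkout -q -b newskin
git push -q origin newskin 2>/dev/null

# Simulate the deletion happening elsewhere: remove the branch server-side
git -C ../remote.git branch -D newskin >/dev/null

git branch -r              # stale origin/newskin still listed
git remote prune origin    # the stale tracking ref is pruned here
git branch -r              # origin/newskin is gone
```

<p>As an aside, newer versions of git can also prune as part of the fetch itself, via git fetch --prune.</p>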
]]></description><link>http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</link><guid isPermaLink="true">http://aaronbonner.io/post/33357135230/update-git-remote-branches-list</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:53:58 GMT</pubDate></item><item><title><![CDATA[Adding New Magento Cache Types]]></title><description><![CDATA[<p><a href="http://magento-quickies.tumblr.com/post/32618242167/adding-new-magento-cache-types" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>A simple configuration recipe for adding new cache tags to the Magento backend&#8217;s &#8220;clear cache&#8221; feature.</p></blockquote>

<p></p>]]></description><link>http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</link><guid isPermaLink="true">http://aaronbonner.io/post/33356950715/adding-new-magento-cache-types</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Oct 2012 09:44:05 GMT</pubDate></item><item><title><![CDATA[Disabling Magento's DB Logs]]></title><description><![CDATA[<p>If you&#39;ve ever been responsible for a busy Magento store, you will inevitably run into issues with the various log_* tables getting too big and caning your database.</p>
<p>In theory the Magento cron subsystem should keep a lid on these tables growing too big, but I avoid using Magento cron, preferring to handle that myself directly via crontab tasks.</p>
<p>The other option is to write your own table cleaning script (or copy one from somewhere), and this will work too. But it&#39;s annoying: if you don&#39;t want this log data, why write it in the first place?</p>
<p>So my solution is to disable it by removing the observer events that perform the logging. </p>
<script src="https://gist.github.com/3608447.js?file=local.xml"></script>

<p>I have this in my local.xml which takes precedence over other nodes in the config and therefore overwrites them. Here, by setting the observer to be the string &#39;disabled&#39;, the existing observer event is removed and replaced with something that will never be fired.</p>
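<p>The local.xml pattern looks roughly like this. This is a sketch, not a drop-in file: the event list is abbreviated, and the observer node name (assumed here to be &#39;log&#39;) must match whatever Mage_Log registers in its own config.xml:</p>

```xml
<!-- app/etc/local.xml (excerpt, sketch with assumed node names) -->
<config>
    <frontend>
        <events>
            <!-- one block per logging event; repeat for the others
                 (customer_login, customer_logout, ...) -->
            <controller_action_predispatch>
                <observers>
                    <!-- replace the registered observer with the string
                         'disabled' so it is never fired -->
                    <log><type>disabled</type></log>
                </observers>
            </controller_action_predispatch>
        </events>
    </frontend>
</config>
```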
<p>Now, you don&#39;t need to worry about periodically cleaning out your database, nor do you need to fear a 3am text message from your production DB servers screaming about the disk being full...</p>
]]></description><link>http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</link><guid isPermaLink="true">http://aaronbonner.io/post/30791731986/disabling-magentos-db-logs</guid><category><![CDATA[magento]]></category><category><![CDATA[logging]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 03 Sep 2012 10:40:00 GMT</pubDate></item><item><title><![CDATA[Magento CatalogSearch does not escape Breadcrumbs]]></title><description><![CDATA[<p>Ahh a little WTF to start the morning.</p>
<p>I&#39;m going through some PCI scan results this morning, and in the main it&#39;s going well, but I got a couple of XSS hits on our catalogsearch pages. This is odd, I think. I&#39;ve audited these pages; they definitely get routed through magento&#39;s escaping code.</p>
<p>On closer examination it turned out the form was okay; it was via the breadcrumbs that unescaped input was getting into the wild.</p>
<p>I&#39;m running Mage 1.6.x so this code may look a little different if you&#39;re running 1.7</p>
<p>Take a look at app/code/core/Mage/CatalogSearch/Block/Result.php, and specifically at the prepareLayout() method:</p>
<script src="https://gist.github.com/3524039.js?file=Result.php"></script>

<p>Now if you look at line 11, if breadcrumbs are enabled, unescaped input is happily added ready for output.</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getQueryText());
</code></pre>
<p>The fix is easy; replace that line with:</p>
<pre><code>$title = $this-&gt;__(&quot;Search results for: &#39;%s&#39;&quot;, $this-&gt;helper(&#39;catalogsearch&#39;)-&gt;getEscapedQueryText());
</code></pre>
<p>This is a really neat example of the evils of duplication and where bad programming practice can lead to real world problems. I am speculating, but it seems reasonable to infer that the original programmer got trigger happy with the copy &amp; paste keys. Later, you could imagine another engineer coming in to make the code XSS-safe, fixing one occurrence but (and programmers are human) missing the other (exactly the same line), and we end up with an issue like this.</p>
<p>Personally, I patched the file as described above and stuck it in app/code/local/Mage to override the core code pool version.</p>
]]></description><link>http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</link><guid isPermaLink="true">http://aaronbonner.io/post/30510835544/magento-catalogsearch-does-not-escape-breadcrumbs</guid><category><![CDATA[magento]]></category><category><![CDATA[xss]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Aug 2012 08:16:00 GMT</pubDate></item><item><title><![CDATA[Build a Chef Gem From Source]]></title><description><![CDATA[<p>I get really frustrated with Ruby packages, they promise so much and when on that special day the moon is aligned with Mars, it all just works, and life is great.</p>
<p>Unfortunately this doesn&#39;t happen very often and when using a stack of Rubygems, you almost always get bitten by something.</p>
<p>My cause for complaint today is <a href="http://www.vagrantup.com">Vagrant</a> and <a href="http://community.opscode.com">Chef</a>, well specifically <a href="http://wiki.opscode.com/display/chef/Chef+Solo">Chef Solo</a>. Vagrant is fine, it does what you tell it to do, and for most use-cases Chef Solo is the right tool to use for provisioning your virtual server. But the <a href="http://vagrantup.com/v1/docs/provisioners/chef_solo.html">Vagrant docs on Chef Solo</a> unfortunately fib: they say you can use <a href="http://wiki.opscode.com/display/chef/Data+Bags">Data Bags</a> with Chef Solo, <a href="http://wiki.opscode.com/display/chef/Data+Bags#DataBags-UsingDataBagswithChefSolo">but by default you cannot</a>. </p>
<p>This is a big deal, as many useful Chef recipes make heavy use of Data Bags. Data Bags, which let you provide environment-specific configuration for your provisioning, are not yet supported by the stock Chef Gem (currently version 10.12.0). In order to make use of Data Bags with Chef Solo, you need version 10.14.0 or above. This means building the gem from source.</p>
<p>I use <a href="https://github.com/jedi4ever/veewee/">Veewee</a> to build my vagrant base boxes (you should too, it&#39;s awesome!), and you can edit the postinstall.sh file in your box definition folder to build Chef from source, rather than installing it directly via Rubygems.</p>
<script src="https://gist.github.com/3377779.js?file=chef-from-src.sh"></script>

<p>You can repeat this for your local dev machine, and now you can get Chef Solo cooking up your recipes and happily using data bags.</p>
]]></description><link>http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</link><guid isPermaLink="true">http://aaronbonner.io/post/29614785433/build-a-chef-gem-from-source</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[chef]]></category><category><![CDATA[chef solo]]></category><category><![CDATA[provisioning]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 17 Aug 2012 10:33:00 GMT</pubDate></item><item><title><![CDATA[Remove a Magento Adminhtml Menu Option]]></title><description><![CDATA[<p>If, for whatever reason, you need to remove an entry from the magento admin menu, you have two simple options. Remove it using css, or alternatively, drop the following into a custom module&#39;s adminhtml.xml.</p>
<script src="https://gist.github.com/3323947.js"></script>

<p>This overrides the core code pool&#39;s adminhtml definition, and puts a dependency on a non-existent module. Effectively, this disables the menu item because it no longer meets the defined dependency requirements.</p>
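<p>The adminhtml.xml override follows this shape. It&#39;s a sketch with invented names: the menu node path and the non-existent module name are whatever suits your case:</p>

```xml
<!-- app/code/local/My/Tweaks/etc/adminhtml.xml (sketch, names invented) -->
<config>
    <menu>
        <!-- same node path as the menu entry you want to hide;
             this example targets the top-level Catalog menu -->
        <catalog>
            <depends>
                <!-- a module that does not exist, so the dependency
                     check fails and the entry is hidden -->
                <module>Nonexistent_Module</module>
            </depends>
        </catalog>
    </menu>
</config>
```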
<p>As always with any magento configuration / module changes, you may need to clear caches for this to take effect.</p>
]]></description><link>http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</link><guid isPermaLink="true">http://aaronbonner.io/post/29189661031/remove-a-magento-adminhtml-menu-option</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 11 Aug 2012 11:35:00 GMT</pubDate></item><item><title><![CDATA[Find and Delete Files Between Two Dates  ]]></title><description><![CDATA[<p>GNU Find never ceases to amaze me with its utility.</p>
<p>Yesterday I had to do an emergency restart of mysql in production and the resulting magento report/xxxx files swamped out everything else that I might have wanted to look at.</p>
<p>So specifically, I wanted to delete all the files that were created between a start and end date.</p>
<p>GNU find makes this easy</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -ls
</code></pre>
<p>This instructs find to list (-ls) all files (-type f) that are newer than a file called 655958024588 (-newer) and not newer than 627812220533 (! -newer).</p>
<p>If you do not have two files to act as date range boundaries, you can use touch to create them.</p>
<pre><code>$ touch -t yyyymmddHHMM start_date_file
$ touch -t yyyymmddHHMM end_date_file
</code></pre>
<p>Then supply these file names to -newer and ! -newer. </p>
<p>To delete the files we can use -exec.</p>
<pre><code>$ find . -type f -newer 655958024588 ! -newer 627812220533 -exec rm -rf {} \;
</code></pre>
<p>Here, it&#39;s the -exec argument that does the heavy lifting. {} is a placeholder for the file name (find substitutes &#39;{}&#39; with each found filename) and ; terminates the command sequence (much like it does in regular bash); it is backslash-escaped so the shell passes it through to find rather than interpreting it itself.</p>
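<p>Putting the pieces together, here&#39;s a self-contained toy run (file names and dates invented) showing only the in-range file being deleted:</p>

```shell
# Three files, two boundary markers, delete only what falls between them.
cd "$(mktemp -d)"
mkdir report
touch -t 202601010200 report/too-early   # 02:00, before the range
touch -t 202601010400 report/in-range    # 04:00, inside the range
touch -t 202601010600 report/too-late    # 06:00, after the range
touch -t 202601010300 start_marker       # range start, 03:00
touch -t 202601010500 end_marker         # range end, 05:00
find ./report -type f -newer start_marker ! -newer end_marker -exec rm -f {} \;
ls report    # only too-early and too-late remain
```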
]]></description><link>http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</link><guid isPermaLink="true">http://aaronbonner.io/post/28969404367/find-and-delete-files-between-two-dates</guid><category><![CDATA[unix]]></category><category><![CDATA[find]]></category><category><![CDATA[gnu]]></category><category><![CDATA[linux]]></category><category><![CDATA[sysadmin]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Aug 2012 07:29:00 GMT</pubDate></item><item><title><![CDATA[veewee-templates-update - A Hidden Little Gem]]></title><description><![CDATA[<p><a href="https://github.com/jedi4ever/veewee/">Veewee</a> considerably simplifies the process of creating base distribution images for use with <a href="http://vagrantup.com">Vagrant</a>, but unfortunately you have to choose between using the easy to install gem (which comes with horribly out of date basebox templates), or install the latest version from source, which unfortunately uses rvm in a pretty repugnant way.</p>
<p>So, if you want to use veewee to set up a new amd64 Precise Pangolin basebox for vagrant, you either have to pull the latest veewee sources from github, or download the most recent release&#39;s templates and copy them over into your veewee gem folder. </p>
<p>This is where veewee-templates-update steps in, it automates that latter step (downloading and installing just the updated templates) for you.</p>
<p>Installation is simple:</p>
<pre><code>$ gem install veewee-templates-update
</code></pre>
<p>Then, just run the updater:</p>
<pre><code>$ veewee-templates-update
&gt; Veewee: /home/aaron/.rvm/gems/ruby-1.9.3-p194/gems/veewee-0.2.3
&gt; Downloading: https://github.com/jedi4ever/veewee/tarball/master
&gt; Extracting: CentOS-4.8-i386 CentOS-5.5-i386-netboot CentOS-5.5-x86_64-netboot 
&gt; ...

$ vagrant basebox define precise-amd64 ubuntu-12.04-server-amd64
&gt; ...
&gt; (profit)
</code></pre>
]]></description><link>http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</link><guid isPermaLink="true">http://aaronbonner.io/post/28933222730/veewee-templates-update-a-hidden-little-gem</guid><category><![CDATA[veewee]]></category><category><![CDATA[vagrant]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 07 Aug 2012 21:43:00 GMT</pubDate></item><item><title><![CDATA[Macbook Air - Everything Old is New]]></title><description><![CDATA[<p>I&#39;m just going to jot down my experience with my new Macbook Air. It will be rather more a stream of consciousness than structured prose, so my apologies in advance for that. I&#39;ll clean it all up later.</p>
<p>Anyway, quite happy with it so far, but like any geek with new kit, I want to know everything about it, and make it dance to my whim.</p>
<p>Two things learned this morning: if you have an old Magsafe (1) powerpack, which I did from my old 13&quot; Macbook Pro, and it has a higher or equally rated wattage, you can use it with your macbook air.</p>
<p>It makes sense: a higher rated power supply can support a lower power rated device, but not vice versa. So a 60 watt Macbook Pro magsafe can power a 45 watt Macbook Air, but a 60 watt Magsafe can&#39;t power a 15&quot; 85 watt macbook pro.</p>
<p><em>The other thing</em> which I have discovered is that the thunderbolt port on the newer Airs uses the same socket form factor as the mini display port. This means that if you have an old Mini DisplayPort -&gt; DVI adapter lying around, you can reuse it.</p>
]]></description><link>http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</link><guid isPermaLink="true">http://aaronbonner.io/post/25636905517/macbook-air-everything-old-is-new</guid><category><![CDATA[apple]]></category><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[macbook]]></category><category><![CDATA[mba]]></category><category><![CDATA[air]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 22 Jun 2012 08:57:16 GMT</pubDate></item><item><title><![CDATA[Changing Your Shell in OSX]]></title><description><![CDATA[<p>So, back from the Linux jungle and sitting in front of a Macbook once again.</p>
<p>My first real job has been to get a decent unix environment up. OSX&#39;s BSD utilities don&#39;t really cut it. <a href="http://www.macports.org/">Macports</a> is far and away the best distribution out there.</p>
<p>Once you install coreutils and get ls, find, etc., it makes sense that you will want to change your shell to a modern version of bash (or zsh if that&#39;s the way you roll).</p>
<p>Lion ships with bash 3.2 whereas Macports will give you a contemporary version 4.2. Unfortunately it&#39;s not as simple as going</p>
<pre><code>$ sudo port install bash
$ chsh 
  &lt;input /opt/local/bin/bash&gt;
</code></pre>
<p>I had to do this before, but I&#39;d forgotten there&#39;s a trick to changing your shell in OSX to a non-standard location. The file /etc/shells contains a list of valid shells chsh will permit. You need to edit this file (as root or via sudo) and add your macports shells. Once that&#39;s done, chsh will let you change no problem.</p>
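<p>Once edited, /etc/shells ends up looking something like this (the macports path is an assumption; list whichever shells you actually installed):</p>

```
# /etc/shells: list of acceptable login shells
/bin/bash
/bin/csh
/bin/ksh
/bin/sh
/bin/tcsh
/bin/zsh
/opt/local/bin/bash
```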
]]></description><link>http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/25607101819/changing-your-shell-in-osx</guid><category><![CDATA[mac]]></category><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><category><![CDATA[unix]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 21 Jun 2012 23:28:00 GMT</pubDate></item><item><title><![CDATA[Format a Javascript Array Literal from MySQL Output]]></title><description><![CDATA[<p>A quick bit of shell-fu.</p>
<p>To take a column from a MySQL database and quickly output it ready formatted as a Javascript array literal (without any specific escaping) do:</p>
<pre><code>$ echo &#39;SELECT column FROM table WHERE some_column = &quot;somevalue&quot;&#39; | mysql -uuser -ppass --silent yourdb | awk -v q=&quot;&#39;&quot; &#39;{ print q $0 q }&#39; | paste -s -d &#39;,&#39; - | sed &#39;s/.*/[&amp;];/&#39;
</code></pre>
<p>The first part of the command is self explanatory, you pipe in a query to mysql, and ask it to give you raw unadorned output. It will return each row for column &#39;column&#39; from table &#39;table&#39; as a line of output.</p>
<p>You pipe it to awk and ask it to wrap the values in single quotes. Due to shell escaping with single quotes, you set the q variable to a single quote. Paste then joins all the output lines together separated by commas.</p>
<p>Finally I use sed to wrap the resulting output in Javascript array literal &#39;[&#39; and &#39;]&#39; symbols. Awk, or any other concatenation approach, would do just fine here too.</p>
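<p>To see the quoting gymnastics in isolation, the same transform can be fed from printf instead of mysql (colour names standing in for real rows):</p>

```shell
# Stand-in for the mysql output: one value per line.
printf 'red\ngreen\nblue\n' \
  | awk -v q="'" '{ print q $0 q }' \
  | paste -s -d ',' - \
  | sed 's/.*/[&];/'
# prints: ['red','green','blue'];
```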
]]></description><link>http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/25497886117/format-a-javascript-array-literal-from-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[shell]]></category><category><![CDATA[commandline]]></category><category><![CDATA[awk]]></category><category><![CDATA[sed]]></category><category><![CDATA[bash]]></category><category><![CDATA[unix]]></category><category><![CDATA[linux]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 20 Jun 2012 10:31:12 GMT</pubDate></item><item><title><![CDATA[Date Range with Null Search in Solr 4.x]]></title><description><![CDATA[<p>In Solr to do query for a date range you use the syntax: </p>
<pre><code>field_name: [Start TO Finish] 
</code></pre>
<p>You can also use wildcards and specific constants in a logical way e.g:</p>
<pre><code>[NOW TO *] or [* TO *]
</code></pre>
<p>To search over documents that do not have a value for that date field, e.g. is NULL, you use the syntax:</p>
<pre><code>-field_name: [* TO *]
</code></pre>
<p>It is hard, though, to search for dates that are EITHER NULL OR lie in a specific range.</p>
<p>It would seem logical to specify </p>
<pre><code>date_field:[Start TO Finish] OR -date_field:[* TO *]
</code></pre>
<p>Unfortunately Solr does not appear to support specifying a field multiple times in this way.</p>
<p>So the trick is to effectively query for everything you do not want, then negate the result. </p>
<p>This approach says: select anything that is not in the date range and is not NULL. When you invert that result, you get all the documents that sit inside the date range or are NULL.</p>
<p>The query to weave this magic is:</p>
<pre><code>-(-date_field:[Start TO Finish] AND date_field:[* TO *])
</code></pre>
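<p>With a concrete (invented) field and Solr&#39;s date maths, a &#39;modified in the last week, or never modified&#39; filter would read:</p>

```
-(-last_modified:[NOW-7DAYS TO NOW] AND last_modified:[* TO *])
```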
]]></description><link>http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</link><guid isPermaLink="true">http://aaronbonner.io/post/25428518232/date-range-with-null-search-in-solr-4x</guid><category><![CDATA[solr]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 19 Jun 2012 11:22:00 GMT</pubDate></item><item><title><![CDATA[Mage::getSingleton() considered harmful]]></title><description><![CDATA[<p>Magento makes use of design patterns, or at least an <em>interpretation</em> of design patterns. One particularly pernicious one, is Mage::getSingleton().</p>
<p>A <a href="http://c2.com/cgi/wiki?SingletonPattern">Singleton</a>, if you&#39;ve not heard the term before, was popularised in the <a href="http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612">Design Patterns</a> book by the Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides). To be very succinct, a Singleton is a way to ensure there is only ever one instance of a class in an Object Oriented design. To put it in even simpler terms, it is an Object Oriented version of a global variable.</p>
<p>It&#39;s used <em>heavily</em> in Magento (in the app/code/core directory, 2261 times in fact!). But anyway, why is it considered harmful? There are a number of arguments for why and why not. Herb Sutter&#39;s <a href="http://www.drdobbs.com/184401625">Once is not enough</a> gives a pretty good (and fun to read) overview of them, or you can read <a href="http://www.object-oriented-security.org/lets-argue/singletons">Kenton Varda</a> who looks in-depth at the topic. I generally think though, that in Object Oriented software, you&#39;re seeking to create abstractions around complexity. The Singleton is a (too) convenient escape hatch from encapsulation and can lead to the attendant issues you get with global variables.</p>
<p>In Magento/PHP land, a more implementation-specific problem with Singletons is memory consumption. Today I was revisiting a Magento Promotions extension I had written and trying to figure out why it was suddenly obliterating PHP&#39;s memory_limit. </p>
<p>This extension basically piggybacks on the existing Promotions/Coupons system, but generates an index of products that match coupon codes, the price before and after the promotion is applied and some other metadata.</p>
<p>In order to determine what products have a promotion associated with them, I run through all the products and match SalesRule conditions against them. I create a synthetic quote for the products that match and then pump them through the SalesRule validator. This effectively applies the promotion to the product and lets us see what the savings are.</p>
<p>It&#39;s fairly basic, it doesn&#39;t look at multi product combinations but it works well enough for simple cases.</p>
<pre><code>/**
 * Apply pricing rules to a synthetic quote to calculate discounted price
 * 
 * @param string $couponCode
 * @param Mage_Catalog_Model_Product $product
 * @return  float
 */
public function applyToProduct($couponCode, $product)
{
    $quote = Mage::getModel(&#39;sales/quote&#39;);
    $item = Mage::getModel(&#39;sales/quote_item&#39;)
        -&gt;setQuote($quote)
        -&gt;setProduct($product)
        -&gt;setQty(1)
        -&gt;setBaseDiscountCalculationPrice($product-&gt;getPrice())
        -&gt;setDiscountCalculationPrice($product-&gt;getPrice());

    $validator = Mage::getSingleton(&#39;salesrule/validator&#39;)
        -&gt;init(1, 1, $couponCode);
    
    $validator-&gt;process($item);
    
    return $product-&gt;getPrice() - $item-&gt;getDiscountAmount();
}
</code></pre>
<p>Now when I wrote this code, it seemed sensible to use the validator as a Singleton, after all I only needed one copy of it. It didn&#39;t, at the time, seem to make sense to create and then destroy the validator a couple of thousand times during indexing. Indeed when this code was first deployed, everything ran smoothly.</p>
<p>Recently the user of this extension added a whole bunch of sales rules - and this caused that product/salesrule index loop to detonate. </p>
<p><em>That Singleton Validator, which was written as some sort of optimization, started happily hosing over a gig of ram.</em></p>
<p>Changing getSingleton() to getModel() took ram usage down from 1100MB to about 80MB.</p>
<p>My suspicion is that PHP&#39;s garbage collection wasn&#39;t cleaning up adequately after each validation attempt. As the validator is effectively static, it never gives up its references for PHP to clean up. When you use getModel(), the validator loses all its references after each loop. While that means it also has to be constructed on each iteration, it allows PHP to free the memory it was using.</p>
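<p>That suspicion is easy to model outside Magento: in any garbage-collected language, an object that never dies pins everything it references. Here&#39;s a plain Ruby sketch of the failure mode (the class and the numbers are invented for illustration):</p>

```ruby
# A long-lived "singleton" that quietly accumulates a reference per
# processed item, so none of the items can ever be collected.
class Validator
  def initialize
    @seen = []          # stands in for whatever state the real validator caches
  end

  def process(item)
    @seen << item       # the leak: items are pinned for the singleton's lifetime
  end

  def retained
    @seen.size
  end
end

SINGLETON = Validator.new
10_000.times { |i| SINGLETON.process("item-#{i}") }
GC.start                # a GC pass reclaims nothing: all 10,000 are still reachable
SINGLETON.retained      # => 10000
```

<p>A fresh, short-lived validator per iteration (the getModel() approach) drops its references at the end of each loop, which is exactly what lets the collector reclaim the memory.</p>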
<p>The Singleton is already a controversial pattern these days, but Magento developers should be particularly wary of its implementation and its scope to hose memory.</p>
]]></description><link>http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</link><guid isPermaLink="true">http://aaronbonner.io/post/24819898637/magegetsingleton-considered-harmful</guid><category><![CDATA[magento]]></category><category><![CDATA[singleton]]></category><category><![CDATA[optimization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 10 Jun 2012 16:36:00 GMT</pubDate></item><item><title><![CDATA[PHPStorm and Ubuntu Unity]]></title><description><![CDATA[<p>Out of the box, sadly, PHPStorm doesn&#39;t make nice with the Ubuntu Unity launcher.</p>
<p>Typically I manage PHPStorm by extracting it to /opt and then symlinking the extracted folder to /opt/PhpStorm. </p>
<p><img src="/images/tumblr_m3nd0p0yKN1qac7a1.png" alt=""></p>
<p>To create a nice launcher for Unity, create a desktop entry under ~/.local/share/applications:</p>
<pre><code>$ vim ~/.local/share/applications/jetbrains-phpstorm.desktop
</code></pre>
<p>Now paste the following in (adjusted for your own paths)</p>
<pre><code>[Desktop Entry]
Version=4.0.1
Type=Application
Name=JetBrains PhpStorm
Exec=/opt/PhpStorm/bin/phpstorm.sh %f
Icon=/opt/PhpStorm/bin/webide.png
Comment=Develop with pleasure!
Categories=Development;IDE;
Terminal=false
StartupNotify=true
StartupWMClass=jetbrains-phpstorm
</code></pre>
<p>Hit your Windows or Super key and type in &#39;Php&#39; - you should see the newly created desktop entry there. Once launched and fully started, you can opt to keep PHPStorm locked to the Unity launcher for easy startup.</p>
<p><img src="/images/tumblr_m3nd5gJ6pI1qac7a1.png" alt="">
 </p>
]]></description><link>http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</link><guid isPermaLink="true">http://aaronbonner.io/post/22580253608/phpstorm-and-ubuntu-unity</guid><category><![CDATA[unity]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[phpstorm]]></category><category><![CDATA[idea]]></category><category><![CDATA[jetbrains]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 May 2012 09:55:00 GMT</pubDate></item><item><title><![CDATA[Chroot into a Broken Linux Install]]></title><description><![CDATA[<p>For about eight years I ran Gentoo Linux before I eventually gave it up and moved on to Ubuntu. It was remarkable in that it provided a BSD-like ports system and let you compile your system from the ground up. It also tended to break, a lot.</p>
<p>Even today, almost all x86 Linux distributions can (in theory) run on a 32-bit 386 processor. Let&#39;s be clear though: while the 386 is a remarkable (and successful) processor, it is an antique. The selling point of Gentoo was that it was one of the few distributions to give you the power to build a system specifically for modern processors, abandoning backwards compatibility with long-obsolete ones.</p>
<p>In the early days you had to start from what was called a &#39;stage 1&#39; install (I don&#39;t think they do it this way anymore). A stage 1 install is where all you have is a bootable livecd with a basic set of tools: a C compiler, a shell and the basic GNU coreutils...and irssi. Enough utilities to allow you to build further C packages. You&#39;d run the command &#39;emerge system&#39; and it would go off and build gcc, glibc, coreutils etc. Once these were built, you&#39;d then rebuild glibc and gcc with your newly compiled, architecture-specific compiler and C library.</p>
<p>Anyway, this took a long time, particularly on sub-gigahertz Pentium 2s and 3s. And Gentoo systems tended to break a lot - and by break, in the absolute best case, I mean they merely became un-bootable.</p>
<p>The process to recover the system was pretty much the same as installing it: you had to boot from a livecd, configure your network card, hook up to the network, then chroot into the broken disk. At this point you could try and repair whatever damage you had caused.</p>
<p>These days if you do something silly, like I don&#39;t know, try and dist-upgrade from Ubuntu Oneiric to Precise, you can get that true Gentoo feeling (i.e. nothing works and you can&#39;t boot the machine).</p>
<p>This happened to me this afternoon and the hard yards done with Gentoo came to the rescue.</p>
<p>Here&#39;s how you do it.</p>
<p>Boot up from a livecd (or USB key), get the network card modules loaded and get a DHCP address. With the Precise Live DVD you can do all of this pretty easily by selecting the &#39;try without installing&#39; option from the bootloader.</p>
<p>Once you&#39;re online, you need to prepare the mount. The first step is to mount the root partition somewhere; I typically use /mnt/&lt;distroname&gt;, so here /mnt/ubuntu (it can be whatever). If you&#39;re not sure of your partition numbers, your livecd will almost certainly come with fdisk, in which you can press &#39;p&#39; to print the partition table.</p>
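<p>If you just want a quick, read-only look at the partitions the kernel sees without entering interactive fdisk, /proc/partitions works from any livecd shell (a sketch; /dev/sda is an example device):</p>

```shell
# list the block devices and partitions the kernel knows about; needs no root
cat /proc/partitions

# with root, fdisk can also print a partition table non-interactively:
# fdisk -l /dev/sda
```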
<pre><code>$ mount -t ext4 /dev/sda5 /mnt/ubuntu
</code></pre>
<p>If you have a separate boot partition mount that too.</p>
<pre><code>$ mount -t ext2 /dev/sda1 /mnt/ubuntu/boot
</code></pre>
<p>Now in order to have a functional chroot, we need the proc, dev and sys subsystems to be mounted onto the chroot. This is the tricky bit.</p>
<pre><code>$ mount -t proc none /mnt/ubuntu/proc
$ mount -o bind /dev /mnt/ubuntu/dev
$ mount -o bind /sys /mnt/ubuntu/sys
</code></pre>
<p>In the case of the sys and dev dirs, we need to reference the exact same mountpoints as the host so we use the -o bind option.</p>
<p>Last thing: we want functional network name resolution, so we copy the host&#39;s /etc/resolv.conf to /mnt/ubuntu/etc/resolv.conf.</p>
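<p>That copy is a one-liner; here it is wrapped in a small function so the target directory is explicit (a sketch - /mnt/ubuntu is this post&#39;s mountpoint, adjust for yours):</p>

```shell
# copy_dns CHROOT_DIR: give the chroot the host's DNS configuration so
# name resolution works inside it
copy_dns() {
    cp /etc/resolv.conf "$1/etc/resolv.conf"
}

# copy_dns /mnt/ubuntu    # run once the root FS is mounted
```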
<p>Now the chroot is ready:</p>
<pre><code>$ chroot /mnt/ubuntu /bin/bash
$ source /etc/profile
</code></pre>
<p>The chroot will be pretty much as it would be if you&#39;d booted into it normally, with a few exceptions. The kernel and kernel modules will be those of the host. If you need to access some specific hardware, you need to set that up on the host.</p>
<p>My busted Precise install was fixed with a simple apt-get update and upgrade, before re-running the grub installer.</p>
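<p>For the record, the repair from inside the chroot looked roughly like this (a sketch - /dev/sda is this machine&#39;s boot disk, adjust for yours):</p>

```shell
# inside the chroot: refresh package lists and finish the interrupted upgrade
apt-get update
apt-get dist-upgrade

# then reinstall the bootloader and regenerate its config
grub-install /dev/sda
update-grub
```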
]]></description><link>http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</link><guid isPermaLink="true">http://aaronbonner.io/post/21103731114/chroot-into-a-broken-linux-install</guid><category><![CDATA[linux]]></category><category><![CDATA[recovery]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[gentoo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 14 Apr 2012 20:58:00 GMT</pubDate></item><item><title><![CDATA[Magento Inventory Rebuild sets Grouped Products Out of Stock]]></title><description><![CDATA[<p>I came across a particularly nasty bug in Magento 1.6.2.0 last night where calling
Mage::getSingleton(&#39;cataloginventory/stock_status&#39;)-&gt;rebuild() would set all
grouped products to be out of stock. This didn&#39;t happen in 1.5; however, the cataloginventory status handling changed dramatically between 1.5 and 1.6.</p>
<p>Forcing the cataloginventory_stock indexer to re-run fixes the situation, but if you want to script the status update of many stock items, you can have a short period where your store&#39;s products will be unavailable.</p>
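<p>To force that reindex from the command line (assuming Magento&#39;s stock shell tools are present under shell/ in your install root):</p>

```shell
# from the Magento root: list the available indexers and their codes...
php shell/indexer.php info

# ...and rebuild just the stock index
php shell/indexer.php --reindex cataloginventory_stock
```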
<p>Stepping through the issue I found myself in app/code/core/Mage/Catalog/Model/Resource/Product/Status.php, specifically in the getProductStatus() method.</p>
<pre><code>/**
 * Retrieve Product(s) status for store
 * Return array where key is a product_id, value - status
 *
 * @param array|int $productIds
 * @param int $storeId
 * @return array
 */
public function getProductStatus($productIds, $storeId = null)
{
   $statuses = array();

   $attribute      = $this-&gt;_getProductAttribute(&#39;status&#39;);
   $attributeTable = $attribute-&gt;getBackend()-&gt;getTable();
   $adapter        = $this-&gt;_getReadAdapter();

   if (!is_array($productIds)) {
       $productIds = array($productIds);
   }

   if ($storeId === null || $storeId == Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID) {
       $select = $adapter-&gt;select()
           -&gt;from($attributeTable, array(&#39;entity_id&#39;, &#39;value&#39;))
           -&gt;where(&#39;entity_id IN (?)&#39;, $productIds)
           -&gt;where(&#39;attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;store_id = ?&#39;, Mage_Catalog_Model_Abstract::DEFAULT_STORE_ID);

       $rows = $adapter-&gt;fetchPairs($select);
   } else {
       $valueCheckSql = $adapter-&gt;getCheckSql(&#39;t2.value_id &gt; 0&#39;, &#39;t2.value&#39;, &#39;t1.value&#39;);

       $select = $adapter-&gt;select()
           -&gt;from(
               array(&#39;t1&#39; =&gt; $attributeTable),
               array(&#39;value&#39; =&gt; $valueCheckSql))
           -&gt;joinLeft(
               array(&#39;t2&#39; =&gt; $attributeTable),
               &#39;t1.entity_id = t2.entity_id AND t1.attribute_id = t2.attribute_id AND t2.store_id = &#39; . (int)$storeId,
               array(&#39;t1.entity_id&#39;)
           )
           -&gt;where(&#39;t1.store_id = ?&#39;, Mage_Core_Model_App::ADMIN_STORE_ID)
           -&gt;where(&#39;t1.attribute_id = ?&#39;, $attribute-&gt;getAttributeId())
           -&gt;where(&#39;t1.entity_id IN(?)&#39;, $productIds);
       $rows = $adapter-&gt;fetchPairs($select);
   }

   foreach ($productIds as $productId) {
       if (isset($rows[$productId])) {
           $statuses[$productId] = $rows[$productId];
       } else {
           $statuses[$productId] = -1;
       }
   }

   return $statuses;
}
</code></pre>
<p>This method goes through a list of product ids and assigns a status id to each. This is typically used on grouped products when determining whether all their children&#39;s stock items are out of stock.</p>
<p>In testing, the status ids were all coming back as -1, i.e. not valid, and therefore the group was out of stock.</p>
<p>In my code the store id was neither null nor the default store id, so execution fell through to the else branch. At first I inserted a print_r($select-&gt;assemble()) to see the SQL being generated. The SQL was fine, and when pasting it into MySQL I got a bunch of valid-looking results. Funnily though, the status column was first and the product id column was second (unlike the if branch, where they were in the reverse order). This presents a problem when we reach the fetchPairs() statement.</p>
<p>Zend DB&#39;s fetchPairs returns an associative array resultset where column a is the key and column b is the value. Because the SQL was returning the status column first (i.e. as the key), the result set consisted of just two rows (one for each unique status code). For this code to work as you would expect, the entity id (product id) needs to come first in the result set so that it is used as the key.</p>
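<p>The collapsing behaviour is easy to demonstrate outside of Magento by mimicking fetchPairs with an awk associative array (toy data: three products sharing two statuses):</p>

```shell
# three (status, product_id) rows; only two distinct statuses
rows='1 10
1 11
2 12'

# fetchPairs keys the result on the FIRST column, so keying on status
# collapses the set, while keying on product id keeps every row
by_status=$(printf '%s\n' "$rows" | awk '{p[$1]=$2} END {n=0; for (k in p) n++; print n}')
by_id=$(printf '%s\n' "$rows" | awk '{p[$2]=$1} END {n=0; for (k in p) n++; print n}')
echo "status first: $by_status entries, product id first: $by_id entries"
```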
<p>The fix is straightforward enough: replace</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>with</p>
<pre><code>$select = $adapter-&gt;select()
            -&gt;from(
                array(&#39;t1&#39; =&gt; $attributeTable),
                array(&#39;entity_id&#39;, &#39;value&#39; =&gt; $valueCheckSql))
</code></pre>
<p>This way the product id is always used as the key in fetchPairs and you get a status result for each product.</p>
]]></description><link>http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</link><guid isPermaLink="true">http://aaronbonner.io/post/20956645813/magento-inventory-rebuild-sets-grouped-products</guid><category><![CDATA[magento]]></category><category><![CDATA[bugs]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 12 Apr 2012 08:50:00 GMT</pubDate></item><item><title><![CDATA[Book Review: Growing Object-Oriented Software Guided by Tests]]></title><description><![CDATA[<p><img src="/images/tumblr_lz6bneXCrV1qac7a1.jpg" alt=""></p>
<ul>
<li><a href="http://www.growing-object-oriented-software.com/">http://www.growing-object-oriented-software.com/</a></li>
<li><a href="http://www.mockobjects.com">http://www.mockobjects.com</a></li>
</ul>
<p><em>The big idea is messaging.</em> <a href="http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html">Kay98</a>. It&#39;s a quote cited early on in Growing Object Oriented Software Guided by Tests (GOOS), a book that looks at Test Driven Development (TDD) using Mock Objects. This idea of messaging being central to Object Oriented Analysis and Design (OOAD), drives much of what is presented throughout the book.</p>
<p>I think as OOAD has matured over the past decade, the mode of thinking of classes as hierarchical constructs has lost favour. Increasingly OOAD is about managing the collaboration of a large number of small, independent objects. In such designs the solution to the problem is achieved by the way the developer defines the software&#39;s object graph – i.e. its composition. In a design that is focussed on getting the composition of objects just so, communication between them is the important thing, much more so than classification.</p>
<p>In GOOS, TDD is presented as an exercise to first understand and then improve messaging protocols between objects. GOOS demonstrates that Mock Objects are an ideal tool to help discover these protocols. This is an unusual view, even in 2012, for a lot of developers. Typically I&#39;ve always used Mock Objects as a Test Stub or Double: a placeholder object to induce some specific behaviour I want to test, or to isolate the unit of code I&#39;m testing. GOOS sees Mocks a different way, as a means of representing roles within a system. The authors say a Mock Object is not a Stub, but instead an interface to some behaviour or role. <a href="http://www.jmock.org/oopsla2004.pdf">Mock Roles not Objects</a>, a paper written by Freeman, Pryce, et al. way back in 2004, explains this concept and sets out much of the groundwork for GOOS.</p>
<p>GOOS itself is structured into five sections. The first couple of sections (very) briefly introduce the reader to TDD, the basic tenets of Agile development (very heavily influenced by Extreme Programming, XP), testing tools (JUnit, JMock, Hamcrest, Windowlicker) and the authors&#39; OOAD philosophy.</p>
<p>The overwhelming impression I get from Freeman and Pryce&#39;s introduction to TDD and OOAD is that they see writing tests as less an exercise in producing a regression-catching suite, and more as a design exercise. By writing a failing test up front, you have an immediate, testable statement of intent as to what the software will do, and just as importantly, what it won&#39;t do. Focussing on just a narrow slice of behaviour, as represented by a single test, helps narrow the scope of what needs to be done. The act of satisfying the test, making it pass, focuses the developer&#39;s mind on the domain of the problem and forces both the developer and the customer to think about what in the environment impacts the test. Flushing out dependencies, object peers and services early on is a good thing.</p>
<p>Too often in Big Design Up Front approaches, you write a ton of code according to a pre-determined (but unchecked) idea of the environment. When you go to hook everything up at the end you can find, to the horror of your project sponsor (and spouse, who won&#39;t see you for at least the next week), that actually nothing hooks up. Or worse, you have misinterpreted what the sponsor wanted. In a Test Driven Design, if elements of the system are incompatible, you&#39;re alerted to it early. If you have gotten the requirements completely wrong, the customer can see it straight away. TDD takes &#39;Fail Fast&#39; to its logical extreme.</p>
<p>The meat of the book is a worked example. The authors describe a fictional auction sniper system that connects to an auction, makes bids on items and either wins or loses. It&#39;s a simple example, but complicated enough to run into common issues developers face when developing OO software. Certainly there is little trouble filling out 150 pages as the authors work from an abstract set of stories to concrete code. What is good about their example is the way you see the code transforming in stages as extra features are added. The writers are careful to explain the motivations for the transformations they make and they tie it back to the TDD and OOAD principles they introduce in the first two sections.</p>
<p>Having worked through the Auction Sniper application using Java, JUnit, JMock, Hamcrest and Windowlicker, there&#39;s a brief recap and the book moves swiftly on to its fourth and fifth sections.</p>
<p>The fourth section covers &#39;sustainable TDD&#39;, an important and increasingly relevant topic for many developers. The burden of maintaining poorly designed test suites is a drag on developer productivity. Rather than liberating developers to improve the structure of their code, bloated indecipherable test suites become a handbrake. GOOS goes through techniques to keep test suites effective, flexible and importantly - expressive. The concept, that software is about communication, is emphasised across the book. Tests are no different. Tests should express the developer&#39;s intent and function as the rough and ready documentation of a unit of code. I found the advice around constructing data builders - techniques for creating test data for use in your test cases - particularly valuable.</p>
<p>The final section tackles the really hard stuff, dealing with persistence (and by extension any sort of frameworky data service), asynchrony and lastly concurrency. </p>
<p>I found GOOS easy to read and its chapters are of a length that can be easily read on the train/bus or in short bursts. Physically, the paper is of high quality and the typesetting clear and easy to read. I really like the images and diagrams, which are simple, authentic and aid what is being discussed. I felt like the introductory sections to TDD and particularly the authors&#39; OO philosophies were fairly succinct, perhaps too succinct. But as they state from the outset, this is not a book on TDD. And anyway, <a href="http://xunitpatterns.com/">Meszaros07</a> should be enough TDD for any human being. </p>
<p>GOOS comes with a particularly outstanding bibliography. Freeman and Pryce&#39;s academic backgrounds and broad reading are on display with the depth and quality of their references. As a starting point GOOS, while being a pretty domain specific (Mock Object) text, serves as a wonderful launching pad to further OOAD and TDD reading.</p>
<p>I like that the authors are people who practice what they preach. GOOS is a book for people who write real code in the real world. Sadly there seem to be too many authors, &#39;consultants&#39; and coaches these days who talk a lot about programming and programming techniques yet seldom practice it in the wild. GOOS reads like a book borne out of brutal trench warfare with Objects. It is refreshing to read a book detailing a principled design philosophy and practice that has been tested in the dirty unwashed world of Enterprise.</p>
<p>I&#39;ve slowly been pruning back my physical book collection and trying to maintain a library of what I would consider &#39;classics&#39;. The GoF book, Fowler&#39;s Refactoring and PoEAA books, Beck&#39;s XP White Book, the Prag Prog book, Kernighan and Ritchie&#39;s C book, SICP and Knuth&#39;s The Art of Computer Programming series (and, if I can ever get it, Mike Abrash&#39;s Black Book). I think, for me, this book fits into that category. It represents a decade of work and thinking, neatly explained by two highly skilled and above all practical developers who were at the heart of it all.</p>
<p>I highly recommend this book to anyone actively practising TDD and also generally to anyone with an interest in Object Oriented Software Design and Practice. While GOOS is a &#39;Java book&#39; and day to day I program in PHP, the principles, practices and overarching philosophy easily translate.</p>
<p>Follow Steve Freeman <a href="https://twitter.com/#!/sf105">@sf105</a> on Twitter, and read his blog <a href="http://www.higherorderlogic.com/">http://www.higherorderlogic.com/</a>. Nat Pryce is <a href="https://twitter.com/#!/natpryce">@natpryce</a> and blogs at <a href="http://www.natpryce.com/">http://www.natpryce.com/</a>. </p>
<p>Also be sure to check out the excellent mailing list <a href="http://groups.google.com/group/growing-object-oriented-software">http://groups.google.com/group/growing-object-oriented-software</a>.</p>
]]></description><link>http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</link><guid isPermaLink="true">http://aaronbonner.io/post/17368283968/book-review-growing-object-oriented-software</guid><category><![CDATA[goos]]></category><category><![CDATA[tdd]]></category><category><![CDATA[object oriented programming]]></category><category><![CDATA[architecture]]></category><category><![CDATA[xp]]></category><category><![CDATA[book review]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Feb 2012 11:55:00 GMT</pubDate></item><item><title><![CDATA[Enable Syntax Highlighting for Twig Templates in Vim]]></title><description><![CDATA[<p><a href="http://twig.sensiolabs.org/">Twig</a> is a PHP implementation of Jinja2, a python templating engine. Unfortunately there&#39;s no specific syntax highlighting support for .twig files in Vim. But that&#39;s no real problem as you can use the htmljinja syntax file provided here: <a href="http://www.vim.org/scripts/script.php?script_id=1856">http://www.vim.org/scripts/script.php?script_id=1856</a></p>
<p>To map it to twig, edit your vimrc and add:</p>
<pre><code>au BufRead,BufNewFile *.twig set filetype=htmljinja
</code></pre>
]]></description><link>http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</link><guid isPermaLink="true">http://aaronbonner.io/post/16914552864/enable-syntax-highlighting-for-twig-templates-in</guid><category><![CDATA[php]]></category><category><![CDATA[twig]]></category><category><![CDATA[vim]]></category><category><![CDATA[jinja]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:38:21 GMT</pubDate></item><item><title><![CDATA[The difference between .bashrc and .bash_profile]]></title><description><![CDATA[<p>It can be confusing which file to put certain shell / environment setup information in.</p>
<p>Generally speaking (i.e. not with Mac OS X&#39;s Terminal.app), .bash_profile gets sourced only on login. Specifically, this means only when you enter your username and password at the console. The .bashrc file is sourced when starting an interactive non-login shell - that is, whenever you open up a terminal.</p>
<p>There is some confusion here when you open up a login shell, such as when you use the su - command or run an explicit login shell sometimes provided by a desktop environment. In these cases a login shell means .bash_profile is sourced, and .bashrc is only sourced if your .bash_profile explicitly sources it (many distributions ship a default .bash_profile that does exactly that).</p>
<p>I tend to put environment setup in bash_profile, things like paths and any specific one-time configuration settings that aren&#39;t likely to change very much. But it&#39;s quite reasonable to just put a source .bashrc in your .bash_profile and then put everything in .bashrc.</p>
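<p>For reference, a minimal ~/.bash_profile that delegates to ~/.bashrc (the PATH line is just an example of the kind of one-time setup I mean):</p>

```shell
# ~/.bash_profile: one-time login setup here (PATH tweaks etc.), then
# pull in the interactive-shell config so login shells behave the same
export PATH="$HOME/bin:$PATH"

if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
```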
]]></description><link>http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</link><guid isPermaLink="true">http://aaronbonner.io/post/16914428737/the-difference-between-bashrc-and-bashprofile</guid><category><![CDATA[bash]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 02 Feb 2012 09:29:00 GMT</pubDate></item><item><title><![CDATA[Including a Static CMS Block in a Magento Template]]></title><description><![CDATA[<p>Set up a static block in the admin CMS screens, giving your block an identifier. You then use this identifier to declaratively load the block in your template.</p>
<p>Then, to include it in a template (say homepage.phtml):</p>
<pre><code>&lt;?php echo $this-&gt;getLayout()-&gt;createBlock(&#39;cms/block&#39;)-&gt;setBlockId(&#39;identifier&#39;)-&gt;toHtml() ?&gt;
</code></pre>
]]></description><link>http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</link><guid isPermaLink="true">http://aaronbonner.io/post/16637623223/including-a-static-cms-block-in-a-magento-template</guid><category><![CDATA[magento]]></category><category><![CDATA[theming]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 28 Jan 2012 15:51:41 GMT</pubDate></item><item><title><![CDATA[Mocking a Fluent Interface with PHPUnit]]></title><description><![CDATA[<p>In PHPUnit it&#39;s quite possible to completely mock a class that employs a fluent interface without too much heavy lifting.</p>
<pre><code>$mock = $this-&gt;getMock(&#39;Zend_Mail&#39;);
$mock-&gt;expects($this-&gt;any())
     -&gt;method(new PHPUnit_Framework_Constraint_IsAnything())
     -&gt;will($this-&gt;returnSelf());
</code></pre>
<p>This has the effect of making any method call on your mock object return a reference to itself.</p>
]]></description><link>http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</link><guid isPermaLink="true">http://aaronbonner.io/post/16343646804/mocking-a-fluent-interface-with-phpunit</guid><category><![CDATA[phpunit]]></category><category><![CDATA[testing]]></category><category><![CDATA[tdd]]></category><category><![CDATA[mock objects]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 23 Jan 2012 10:17:00 GMT</pubDate></item><item><title><![CDATA[Installing Oracle Java 7 on Ubuntu Oneiric 11.10]]></title><description><![CDATA[<p>A good, fast, howto on installing an official Java 7 JVM / JRE on Ubuntu 11.10 (Oneiric).</p>
]]></description><link>http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</link><guid isPermaLink="true">http://aaronbonner.io/post/15340822818/installing-oracle-java-7-on-ubuntu-oneiric-1110</guid><category><![CDATA[java]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric]]></category><category><![CDATA[oracle]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 10:26:00 GMT</pubDate></item><item><title><![CDATA[John Carmack on Software Patents (1997)]]></title><description><![CDATA[<p>I saw a report today that <a href="http://arstechnica.com/gadgets/news/2012/01/google-buys-another-round-of-ibm-patents-as-oracle-trial-nears.ars">Google acquired a tranche of patents from IBM</a>. Presumably stocking its patent armoury to defend itself in its various deathmatches with Oracle, Microsoft and Apple. With these increasingly insane patent wars taking place, <em>apparently</em> all in the name of safeguarding &#39;innovation&#39;, I was vaguely reminded of John Carmack commenting on the issue back in the day. I managed to find the quote, and it&#39;s well worth reading.</p>
<p>Are software patents really protecting innovation? Or are they really just protecting large businesses, increasing barriers to entry and reducing the ability for a small, innovative competitor to survive?</p>
<blockquote>
<p>It&#39;s something [software patents] that&#39;s really depressing because it&#39;s so horribly wrong for someone who&#39;s a creative engineering type to think about patenting a way of thinking. Even if you had something really, really clever, the idea that you&#39;re not going to allow someone else to follow that line of thought... All of science and technology is built standing on the shoulders of the people that come before you. Did Newton patent calculus and screw the hell out of the other guy working on it? It&#39;s just so wrong, but it&#39;s what the business world does to things, and certainly the world is controlled by business interests. And technical idealists are a minority.</p>
</blockquote>
<p><a href="http://www.team5150.com/~andrew/carmack/johnc_interview_1997_John_Carmack__The_Boot_Interview_Outtakes.html">Electric Playground interviews John Carmack, 1997</a></p>
<p><a href="http://arstechnica.com/old/content/2004/07/4048.ars">Carmack got bitten by patents himself</a> when he discovered a shadow drawing technique that somewhat ironically came to be known as <a href="http://en.wikipedia.org/wiki/Shadow_volume#Depth_fail">Carmack&#39;s Reverse</a>. While Carmack arrived at the technique independently, Creative had sought a patent for something very similar a few months earlier. The issue raised its head again this year with the Doom 3 source code release. In order to get legal approval to release the code, <a href="http://www.theverge.com/gaming/2011/11/17/2569394/john-carmack-doom-3-patent">Carmack had to change the implementation</a> to avoid infringing Creative&#39;s patent.</p>
<p>Patenting software really doesn&#39;t make sense. In programming there is usually an optimal and natural way of solving any given problem. By making optimal solutions exclusive, it reduces the efficiency with which people can create robust software. It&#39;s something that doesn&#39;t seem likely to encourage innovation to me. It seems much more likely to stifle it. And that&#39;s before we even consider the chilling effect that legal threats from gargantuan corporations can have on would-be technologists.</p>
]]></description><link>http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</link><guid isPermaLink="true">http://aaronbonner.io/post/15318032968/john-carmack-on-software-patents-1997</guid><category><![CDATA[software]]></category><category><![CDATA[open source]]></category><category><![CDATA[patents]]></category><category><![CDATA[john carmack]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 05 Jan 2012 00:31:00 GMT</pubDate></item><item><title><![CDATA[Clean-up after Uninstalled Packages on Debian/Ubuntu]]></title><description><![CDATA[<p>When using <em>apt-get remove</em> or <em>aptitude remove</em> the configuration files of the packages uninstalled are not deleted. </p>
<p>Now you can manually do it yourself with <em>apt-get purge &lt;package&gt;</em>, or, you can use the <em>dpkg</em> command to do it in a nice oneliner.</p>
<pre><code>dpkg --get-selections | grep deinstall | sed &#39;s/deinstall/purge/&#39; | dpkg --set-selections; dpkg -Pa
</code></pre>
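<p>If you would rather inspect before purging, the packages in question show up in dpkg&#39;s listing with an &#39;rc&#39; state (removed, config files remain). A sketch of the filter against canned sample output:</p>

```shell
# two lines of fake 'dpkg -l' output: one installed (ii), one removed
# with leftover config files (rc)
sample='ii  bash    5.0-4  amd64  GNU Bourne Again SHell
rc  oldpkg  1.0-1  amd64  some removed package'

# keep only package names whose state column reads rc; on a live system
# pipe the real thing instead: dpkg -l | awk '$1 == "rc" { print $2 }'
leftovers=$(printf '%s\n' "$sample" | awk '$1 == "rc" { print $2 }')
echo "$leftovers"
```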
]]></description><link>http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</link><guid isPermaLink="true">http://aaronbonner.io/post/15298297310/clean-up-after-uninstalled-packages-on</guid><category><![CDATA[debian]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[packages]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:25:07 GMT</pubDate></item><item><title><![CDATA[Quick VirtualBox CLI Cheatsheet]]></title><description><![CDATA[<p>I have been working with VirtualBox and vagrant a lot today and found myself having to double check the VBoxManage CLI syntax a number of times. So I&#39;m dumping a quick cheatsheet down to help remember it / help anyone else wanting to efficiently manage VBox VMs.</p>
<h2>Start a VM</h2>
<pre><code>VBoxManage startvm &quot;Name of VM&quot; --type headless|gui
</code></pre>
<h2>Forcibly Shutdown a VM</h2>
<pre><code>VBoxManage controlvm &quot;Name of VM&quot; poweroff
</code></pre>
<h2>List VMs</h2>
<pre><code>VBoxManage list vms
</code></pre>
<h2>Unregister/delete a VM</h2>
<pre><code>VBoxManage unregistervm &quot;Name of VM&quot; --delete
</code></pre>
<h2>Forward / Unforward Ports for SSH</h2>
<p>modifyvm must only be used when the VM is powered off; use controlvm for running VMs.</p>
<pre><code>VBoxManage modifyvm &quot;VM name&quot; --natpf1 &quot;guestssh,tcp,,2222,,22&quot;
VBoxManage modifyvm &quot;VM name&quot; --natpf1 delete &quot;guestssh&quot;
</code></pre>
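<p>With the guestssh rule above in place, the guest&#39;s SSH daemon is reachable through the host&#39;s loopback (the username here is a placeholder - use whatever account exists in the guest):</p>

```shell
# host port 2222 forwards to guest port 22, so:
ssh -p 2222 vagrant@localhost
```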
]]></description><link>http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</link><guid isPermaLink="true">http://aaronbonner.io/post/15298110910/quick-virtualbox-cli-cheatsheet</guid><category><![CDATA[virtualbox]]></category><category><![CDATA[cloud]]></category><category><![CDATA[virtualization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 04 Jan 2012 16:19:01 GMT</pubDate></item><item><title><![CDATA[Skype, Ubuntu x86_64 and Tracing Shared Library Issues]]></title><description><![CDATA[<p>I don&#39;t know when it happened, but Skype has blown up on me sometime over the past two months.</p>
<pre><code>$ skype
&gt; skype: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>Hmmm.</p>
<pre><code>$ locate libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1
&gt; /usr/lib/x86_64-linux-gnu/libXss.so.1.0.0
</code></pre>
<p>Okay, so it&#39;s moaning about the X screensaver library, libxss, not being there. But it <em>is</em> there, although specifically, it&#39;s a 64bit library. I bet Skype isn&#39;t 64bit...</p>
<pre><code>$ file /usr/bin/skype
&gt; /usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
</code></pre>
<p>So that explains that, then: we need compatible x86 shared libraries. When you get these sorts of issues it&#39;s always best to see what else is missing, and the <em>ldd</em> tool comes to the rescue.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf76f2000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf75df000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf75d9000)
&gt; libXss.so.1 =&gt; not found
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf75d0000)
&gt; libQtDBus.so.4 =&gt; not found
&gt; libQtGui.so.4 =&gt; not found
&gt; libQtNetwork.so.4 =&gt; not found
&gt; libQtCore.so.4 =&gt; not found
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf75b4000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf74c9000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf749e000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf7480000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf7306000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf7301000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf71cb000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf71b8000)
&gt; /lib/ld-linux.so.2 (0xf76f3000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf7198000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf7194000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf718d000)
</code></pre>
<p>So looking at this, I am missing both a compatible libXss and several Qt libraries.</p>
<p>With Ubuntu, to enable multiarch support, check the file <em>/etc/dpkg/dpkg.cfg.d/multiarch</em>. There should be a line like below (and if there isn&#39;t, add it):</p>
<pre><code># in file /etc/dpkg/dpkg.cfg.d/multiarch
foreign-architecture i386
</code></pre>
<p>Now we need to satisfy Skype&#39;s i386 dependencies.</p>
<pre><code>$ sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libqtgui4:i386
</code></pre>
<p>If we re-run ldd we can see Skype&#39;s shared library dependencies are now satisfied.</p>
<pre><code>$ ldd /usr/bin/skype
&gt; linux-gate.so.1 =&gt;  (0xf7749000)
&gt; libasound.so.2 =&gt; /usr/lib32/libasound.so.2 (0xf7634000)
&gt; libXv.so.1 =&gt; /usr/lib32/libXv.so.1 (0xf762e000)
&gt; libXss.so.1 =&gt; /usr/lib/i386-linux-gnu/libXss.so.1 (0xf7629000)
&gt; librt.so.1 =&gt; /lib32/librt.so.1 (0xf7620000)
&gt; libQtDBus.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtDBus.so.4 (0xf75a6000)
&gt; libQtGui.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtGui.so.4 (0xf6ae0000)
&gt; libQtNetwork.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtNetwork.so.4 (0xf69a4000)
&gt; libQtCore.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtCore.so.4 (0xf6702000)
&gt; libpthread.so.0 =&gt; /lib32/libpthread.so.0 (0xf66e7000)
&gt; libstdc++.so.6 =&gt; /usr/lib32/libstdc++.so.6 (0xf65fc000)
&gt; libm.so.6 =&gt; /lib32/libm.so.6 (0xf65d2000)
&gt; libgcc_s.so.1 =&gt; /usr/lib32/libgcc_s.so.1 (0xf65b4000)
&gt; libc.so.6 =&gt; /lib32/libc.so.6 (0xf6439000)
&gt; libdl.so.2 =&gt; /lib32/libdl.so.2 (0xf6434000)
&gt; libX11.so.6 =&gt; /usr/lib32/libX11.so.6 (0xf62fe000)
&gt; libXext.so.6 =&gt; /usr/lib32/libXext.so.6 (0xf62eb000)
&gt; libQtXml.so.4 =&gt; /usr/lib/i386-linux-gnu/libQtXml.so.4 (0xf62aa000)
&gt; libdbus-1.so.3 =&gt; /lib32/libdbus-1.so.3 (0xf6261000)
&gt; libfontconfig.so.1 =&gt; /usr/lib32/libfontconfig.so.1 (0xf622b000)
&gt; libaudio.so.2 =&gt; /usr/lib32/libaudio.so.2 (0xf6211000)
&gt; libglib-2.0.so.0 =&gt; /lib32/libglib-2.0.so.0 (0xf6118000)
&gt; libpng12.so.0 =&gt; /lib32/libpng12.so.0 (0xf60ee000)
&gt; libz.so.1 =&gt; /usr/lib32/libz.so.1 (0xf60d9000)
&gt; libfreetype.so.6 =&gt; /usr/lib32/libfreetype.so.6 (0xf6041000)
&gt; libgobject-2.0.so.0 =&gt; /usr/lib32/libgobject-2.0.so.0 (0xf5ff2000)
&gt; libSM.so.6 =&gt; /usr/lib32/libSM.so.6 (0xf5fe9000)
&gt; libICE.so.6 =&gt; /usr/lib32/libICE.so.6 (0xf5fcf000)
&gt; libXi.so.6 =&gt; /usr/lib32/libXi.so.6 (0xf5fbf000)
&gt; libXrender.so.1 =&gt; /usr/lib32/libXrender.so.1 (0xf5fb4000)
&gt; libgthread-2.0.so.0 =&gt; /usr/lib32/libgthread-2.0.so.0 (0xf5fad000)
&gt; /lib/ld-linux.so.2 (0xf774a000)
&gt; libxcb.so.1 =&gt; /usr/lib32/libxcb.so.1 (0xf5f8e000)
&gt; libexpat.so.1 =&gt; /lib32/libexpat.so.1 (0xf5f64000)
&gt; libXt.so.6 =&gt; /usr/lib32/libXt.so.6 (0xf5f08000)
&gt; libXau.so.6 =&gt; /usr/lib32/libXau.so.6 (0xf5f04000)
&gt; libpcre.so.3 =&gt; /lib32/libpcre.so.3 (0xf5ec4000)
&gt; libffi.so.6 =&gt; /usr/lib32/libffi.so.6 (0xf5ebd000)
&gt; libuuid.so.1 =&gt; /lib32/libuuid.so.1 (0xf5eb7000)
&gt; libXdmcp.so.6 =&gt; /usr/lib32/libXdmcp.so.6 (0xf5eb0000)
</code></pre>
<p>Skype now starts up and behaves as I would expect.</p>
]]></description><link>http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</link><guid isPermaLink="true">http://aaronbonner.io/post/15236197241/skype-ubuntu-x8664-and-tracing-shared-library</guid><category><![CDATA[linux]]></category><category><![CDATA[skype]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 Jan 2012 10:32:00 GMT</pubDate></item><item><title><![CDATA[Cross compiling to 32bit with GCC]]></title><description><![CDATA[<p>Normally if you want to cross compile something to produce a 32bit executable with gcc, you have to pass in the -m32 argument.</p>
<pre><code>$ gcc -m32 -o test32 test.c
</code></pre>
<p>After a new install of Ubuntu 11.10 I was left scratching my head for a little bit with the following error:</p>
<pre><code>&gt; In file included from /usr/include/stdio.h:28:0,
&gt;             from test.c:1:
&gt; /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
&gt; compilation terminated.
</code></pre>
<p>In order to build 32bit executables you need to install the i386 libc dev package.</p>
<pre><code>$ sudo apt-get install libc6-dev-i386
</code></pre>
<p>For a little bit of &#39;fun&#39;, if you pass the -S -masm=intel arguments to gcc, you can compile once with -m32, and once with -m64, to see how your program changes based on target architecture. It can (perhaps it&#39;s just me :)) be interesting to see the subtle changes in programs based on how they are assembled for x86_64 vs x86.</p>
<pre><code>$ gcc -m32 -S -masm=intel test.c
</code></pre>
]]></description><link>http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</link><guid isPermaLink="true">http://aaronbonner.io/post/14969163463/cross-compiling-to-32bit-with-gcc</guid><category><![CDATA[gcc]]></category><category><![CDATA[assembler]]></category><category><![CDATA[32bit]]></category><category><![CDATA[64bit]]></category><category><![CDATA[c]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Dec 2011 10:29:00 GMT</pubDate></item><item><title><![CDATA[Can't find PHPUnit/Extensions/Database/TestCase.php ?]]></title><description><![CDATA[<p>If you have a slightly dated PHPUnit test-case suite and, like me, have recently reinstalled your OS, you will likely be running a modern version of PHPUnit (3.5/3.6).</p>
<p>Keeping up with changes to PHPUnit&#39;s suite of extensions can be occasionally dizzying, things have been put in and then ripped out of the core quite frequently in the last few releases. A common problem I&#39;m seeing lately upgrading my test-suites to be 3.6 compatible, is the extraction of database testing classes to their own package on the pear.phpunit.de channel.</p>
<p>If you see an error like this:</p>
<pre><code>&gt; include(PHPUnit/Extensions/Database/TestCase.php): failed to open stream: No such file or directory
</code></pre>
<p>Odds are you need to do this:</p>
<pre><code>$ sudo pear config-set auto_discover 1
$ sudo pear install --alldeps pear.phpunit.de/DbUnit
</code></pre>
]]></description><link>http://aaronbonner.io/post/14808439998/cant-find</link><guid isPermaLink="true">http://aaronbonner.io/post/14808439998/cant-find</guid><category><![CDATA[PHPUnit]]></category><category><![CDATA[tdd]]></category><category><![CDATA[php]]></category><category><![CDATA[quality assurance]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Dec 2011 12:49:00 GMT</pubDate></item><item><title><![CDATA[Writing Pipe Friendly Unix Programs in Ruby]]></title><description><![CDATA[<p>A great concise tutorial on how to write pipe aware programs for the unix shell in Ruby. </p>
]]></description><link>http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/14223587021/writing-pipe-friendly-unix-programs-in-ruby</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Dec 2011 19:18:53 GMT</pubDate></item><item><title><![CDATA[Slow Sales Order Address Lookups in Magento]]></title><description><![CDATA[<p>I&#39;m seldom surprised by some of the horrors under the Magento hood, but today&#39;s little gem takes some beating.</p>
<p>On a setup I administer, there are over 200,000 address records. When you view an order in the backend, and click &#39;edit address&#39;, the server grinds away and eventually dies either because it hits the max_execution_time limit or runs out of RAM.</p>
<p>You might see an otherwise meaningless error like this:</p>
<pre><code>Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 79 bytes) in /home/somewhere/public_html/lib/Zend/Db/Statement/Pdo.php on line 290
</code></pre>
<p>The cause of this is the strange manner in which Magento searches for addresses in app/code/core/Mage/Adminhtml/controllers/Sales/OrderController.</p>
<pre><code>/**
 * Edit order address form
 */
public function addressAction()
{
    $addressId = $this-&gt;getRequest()-&gt;getParam(&#39;address_id&#39;);
    $address = Mage::getModel(&#39;sales/order_address&#39;)
        -&gt;getCollection()
        -&gt;getItemById($addressId);
    if ($address) {
        Mage::register(&#39;order_address&#39;, $address);
        $this-&gt;loadLayout();
        $this-&gt;renderLayout();
    } else {
        $this-&gt;_redirect(&#39;*/*/&#39;);
    }
}
</code></pre>
<p>The problem is in the $address-&gt;getCollection()-&gt;getItemById() chain. Magento creates and fully loads a collection of Address objects (which when you have 200k of them, takes a while). The final call in the chain, getItemById() takes the collection, iterates over it assigning each address to an array keyed by its entityId. It then returns any value which matches $addressId.</p>
<p>Now, there&#39;s another, really simple way to do the same thing. It doesn&#39;t involve instantiating 200,000 address objects, or iterating over them, or even using associative arrays. It&#39;s very familiar.</p>
<pre><code>$address = Mage::getModel(&#39;sales/order_address&#39;)-&gt;load($addressId);
</code></pre>
<p>This one line does the same thing far more efficiently. Now, the thing that worries me is I can&#39;t see any reason why they aren&#39;t doing this already. The change works, nothing appears to break, and the speed boost is (obviously) immense.</p>
<p>So why is it not done this way?</p>
]]></description><link>http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/14142637033/slow-sales-order-address-lookups-in-magento</guid><category><![CDATA[magento]]></category><category><![CDATA[wtf]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 13 Dec 2011 01:04:00 GMT</pubDate></item><item><title><![CDATA[Testing Varnish VCL Syntax]]></title><description><![CDATA[<p>As with Apache, Varnish 3 lets you test the syntactic correctness of your VCL files without having to roll the dice by restarting a running server.</p>
<pre><code>$ varnishd -C -f /path/to/mysetup.vcl
&gt; ...
</code></pre>
<p>Varnish will compile the file and print out its full configuration as output. If there&#39;s an error, Varnish handily gives you a compiler error to help trace what went wrong, which could look something like this:</p>
<pre><code>&gt; Message from VCC-compiler:
&gt; Expected an action, &#39;if&#39;, &#39;{&#39; or &#39;}&#39;
&gt; (&#39;input&#39; Line 82 Pos 6)
&gt;     vcl_hash(req.http.Cookie);
&gt; -----########------------------
&gt;
&gt; Running VCC-compiler failed, exit 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</link><guid isPermaLink="true">http://aaronbonner.io/post/14125553826/testing-varnish-vcl-syntax</guid><category><![CDATA[varnish]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[webhosting]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Dec 2011 19:15:00 GMT</pubDate></item><item><title><![CDATA[Making Nice with Ubuntu Oneiric Ocelot]]></title><description><![CDATA[<p>I&#39;ve been using Oneiric since its release and in many ways it is a giant leap forward from Natty Narwhal. Unity is still horrible mind, but a lot of the rough edges (multiple / external monitor support) have been softened and the overall package is more stable.</p>
<p>Some things still grate, particularly the global menus and the overlay scrollbar, but they are manageable.</p>
<p>A great resource I think to checkout for anyone using Oneiric is <a href="http://www.webupd8.org">webupd8</a>. You can find a lot of tips to bend Oneiric to your will. </p>
<p>For me, that means getting an official JVM installed <a href="http://www.webupd8.org/2011/09/how-to-install-oracle-java-7-jdk-in.html">Installing Oracle Java 7</a> and getting a sane desktop environment configured with global menu and overlay scrollbars zapped away <a href="http://www.webupd8.org/2011/10/things-to-tweak-after-installing-ubuntu.html">Things to Tweak After Installing Oneiric</a>.</p>
<p>Linux Mint (now on release 12, Lisa) is increasingly becoming a viable alternative to Ubuntu. I&#39;ve played around with it a fair bit, but I still think it is not quite there yet in terms of &#39;just working&#39; when compared with Ubuntu.</p>
<p>For someone that needs to get work done with a minimum of fuss, Ubuntu is still the leader. I do hope though, someone can work out how to marry usability and power in a slightly better way than I feel Canonical are managing at the moment.</p>
]]></description><link>http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</link><guid isPermaLink="true">http://aaronbonner.io/post/14008289362/making-nice-with-ubuntu-oneiric-ocelot</guid><category><![CDATA[linux]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[oneiric ocelot]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 10 Dec 2011 09:22:00 GMT</pubDate></item><item><title><![CDATA[Quick Commandline Image Manipulation with Imagemagick]]></title><description><![CDATA[<p>I first came across the <a href="http://www.imagemagick.org/script/command-line-processing.php">Imagemagick</a> library and toolkit when I was new to Linux and trying to satisfy <a href="http://www.enlightenment.org">Enlightenment</a> and <a href="http://www.Eterm.org">E-term</a> dependencies and get them up and running. A task, and this was the mid 90s, I do not recall fondly. (And thinking about that experience, despite all the controversy surrounding Gnome 3, and Unity, the Linux Desktop has come a <em>long</em> way).</p>
<p>Anyway, unlike either Enlightenment or E-Term, Imagemagick has been a firm friend ever since.</p>
<p>There are three main tools I find I fall back on, time and time again. </p>
<p><a href="http://www.imagemagick.org/script/display.php">Display</a>, <a href="http://www.imagemagick.org/script/convert.php">Convert</a>, and <a href="http://www.imagemagick.org/script/identify.php">Identify</a>.</p>
<p>Display, funnily enough, will open up an X window with the contents of an image.</p>
<p>Convert, perhaps the most useful tool of them all, lets you resize, transform, and re-format images.</p>
<p>And lastly, Identify lets you get information about a file: size, format, colour depth, offset and so forth.</p>
<p>The power of these tools is best wielded with simple loops in bash, or using GNU Find. For example, to convert a bunch of images from 1920x1080 to, say, 1280x720, you could do something like this:</p>
<pre><code>$ for IMAGE in *.jpg; do
&gt;   # usage convert &lt;action&gt; orgfile.jpg newfile.jpg
&gt;   convert -resize &#39;1280x720&#39; &quot;$IMAGE&quot; &quot;$(echo &quot;$IMAGE&quot; | sed &#39;s/\.jpg$/-resized.jpg/&#39;)&quot;
&gt; done
</code></pre>
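<p>As an aside, the echo-through-sed step in that loop can be replaced with bash&#39;s built-in parameter expansion, which saves spawning two extra processes per file. A small sketch (the filename here is invented):</p>

```shell
IMAGE="holiday-photo.jpg"
# ${IMAGE%.jpg} strips the shortest trailing match of .jpg from the value
echo "${IMAGE%.jpg}-resized.jpg"   # holiday-photo-resized.jpg
```

So the loop body could read `convert -resize '1280x720' "$IMAGE" "${IMAGE%.jpg}-resized.jpg"`.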
<p>Convert tries to be clever and maintain aspect ratios. In this case, 1920x1080 fits into 1280x720, maintaining a 16:9 aspect ratio. But what if we had a 1920x1200 input image? In these cases, convert would actually resize the image to something like 1280x800. If we <em>really</em> want it to ignore common sense and squish things down to our requested 1280x720, we need to use the bang operator, e.g.:</p>
<pre><code>$ convert -resize &#39;1280x720!&#39; srcimg.jpg dstimg.jpg
</code></pre>
]]></description><link>http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</link><guid isPermaLink="true">http://aaronbonner.io/post/13916747022/quick-commandline-image-manipulation-with</guid><category><![CDATA[images]]></category><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[commandline]]></category><category><![CDATA[imagemagick]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 08 Dec 2011 10:46:00 GMT</pubDate></item><item><title><![CDATA[Deleting a Remote Branch with Git]]></title><description><![CDATA[<p>I am just going to give a quick example of how to do this.</p>
<pre><code>$ git push origin :REMOTE_BRANCH_NAME
</code></pre>
<p>For a concrete example of this in action</p>
<pre><code>$ git branch -r
&gt; origin/develop
&gt; origin/master

$ git push origin :develop
&gt; To /home/aaron/development/atestrepo.git
  - [deleted]         develop

$ git branch -r
&gt; origin/master
</code></pre>
<p>So, why does this work, and why is the syntax so odd?</p>
<p>Well if we look at the push command in more detail it starts to make some sense.</p>
<p>If we want to push a local branch to a remote, we would do it like this</p>
<pre><code>$ git push origin mybranch
</code></pre>
<p>Here, origin is the remote, and mybranch the local branch to push up. The result of this command is we now have a remote branch origin/mybranch. But what if we wanted to call the remote branch something else? We would do that like this:</p>
<pre><code>$ git push origin mybranch:adiffnamefortheremotebranch
</code></pre>
<p>This syntax effectively translates to: push the contents of mybranch to a branch called adiffnamefortheremotebranch at remote origin. Now, can you see where this is going with respect to our delete? Deleting a remote branch with just a leading : (and no local branch name) is basically saying push nothing into the remote branch called someremotebranch. Git takes this to mean delete the remote branch.</p>
<pre><code>$ git push origin :someremotebranch
</code></pre>
<p>I find thinking of it in these terms makes it easier to remember the syntax.</p>
<p>It&#39;s also worth reading the <a href="http://progit.org/book/ch3-5.html">Pro Git chapter on remotes</a>, which also covers (although, sadly, too briefly) deleting remote branches.</p>
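<p>As a footnote, newer versions of git (1.7.0 onwards) also accept a more readable spelling of the same operation: git push origin --delete somebranch. A self-contained sketch against a throwaway local remote (all paths and names here are invented for the demo):</p>

```shell
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"              # stand-in for a real remote
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m 'initial commit'
git push -q origin HEAD
git checkout -qb develop
git push -q origin develop
git push -q origin --delete develop               # same as: git push origin :develop
git ls-remote --heads origin                      # develop is no longer listed
```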
]]></description><link>http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</link><guid isPermaLink="true">http://aaronbonner.io/post/12236546696/deleting-a-remote-branch-with-git</guid><category><![CDATA[git]]></category><category><![CDATA[teams]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Nov 2011 10:31:00 GMT</pubDate></item><item><title><![CDATA[Batch Convert to UTF-8 with Iconv and Sponge]]></title><description><![CDATA[<p>Sponge, part of the <a href="http://kitenet.net/~joey/code/moreutils/">Moreutils</a> package, is a neat perl script which takes data from stdin and writes it to a file. This in itself is not remarkable; after all, you can use the &#39;&gt;&#39; operator to do it. What is different is how Sponge waits for end-of-file (EOF) before opening and writing to its output file. I.e. it soaks up all the input data before it starts writing.</p>
<p>This is very, very handy if you want to do in-place substitutions with something like sed or in this case, the GNU text encoding conversion utility <a href="http://www.gnu.org/s/libiconv/documentation/libiconv/iconv.1.html">iconv</a>.</p>
<p>Typically to convert between two encodings, you call iconv like this:</p>
<pre><code>$ iconv -f cp1252 -t utf-8 myfile.txt
</code></pre>
<p>This will convert myfile.txt from Windows Latin 1 (cp1252) to utf-8 and dump the results to stdout. (Don&#39;t make the mistake of thinking iso-8859-1 and cp1252 are equivalent. It is safe to convert iso-8859-1 content as if it&#39;s cp1252, but the reverse is <strong>NOT</strong> true.)</p>
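<p>You can see the difference on a single byte. 0xE9 is &#39;é&#39; in both encodings, but 0x80 is the Euro sign in cp1252 and a mere control character in iso-8859-1 (this sketch assumes GNU iconv; the bytes are written in octal for portable printf):</p>

```shell
# 0xE9 (octal 351) is é in both encodings, so it converts identically
printf '\351' | iconv -f cp1252 -t utf-8 | od -An -tx1      # c3 a9
printf '\351' | iconv -f iso-8859-1 -t utf-8 | od -An -tx1  # c3 a9
# 0x80 (octal 200) is where they diverge: cp1252 gives the Euro sign...
printf '\200' | iconv -f cp1252 -t utf-8 | od -An -tx1      # e2 82 ac
# ...while iso-8859-1 gives the C1 control character U+0080
printf '\200' | iconv -f iso-8859-1 -t utf-8 | od -An -tx1  # c2 80
```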
<p>If you&#39;re in any doubt as to the encoding of the source file, you can inspect it with the &#39;file&#39; command.</p>
<p>So if we have a directory of, say, C source files we want to convert, we can make use of bash, iconv and sponge to save us the tedium of converting each file manually to a new copy, then replacing the original file with the copy.</p>
<pre><code>$ for FILE in *.c *.h; do iconv -f cp1252 -t utf-8 &quot;$FILE&quot; | sponge &quot;$FILE&quot;; done
</code></pre>
<p>Each file is filtered through iconv, the outputs of which are piped into sponge. Sponge soaks up the standard input until the EOF, then writes it to the original file.</p>
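<p>The problem Sponge solves is easy to reproduce with standard tools. Plain redirection truncates the output file before the filter gets a chance to read it, so the naive in-place version destroys your data:</p>

```shell
printf 'hello\n' > /tmp/demo.txt
# the shell truncates /tmp/demo.txt for the > redirection before tr
# has read a single byte, so the "conversion" leaves an empty file
tr 'a-z' 'A-Z' < /tmp/demo.txt > /tmp/demo.txt
wc -c < /tmp/demo.txt   # 0
```

Sponge avoids this by buffering everything first and only opening the output file at EOF.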
]]></description><link>http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</link><guid isPermaLink="true">http://aaronbonner.io/post/11729740923/batch-convert-to-utf-8-with-iconv-and-sponge</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 Oct 2011 12:01:00 GMT</pubDate></item><item><title><![CDATA[Download and Extract Oneliner with Bash]]></title><description><![CDATA[<p><a href="http://www.gnu.org/s/bash/">GNU Bash</a>. Most *nix developers use it every day and, if they are like me, never cease to be amazed at the cool stuff you can do with it.</p>
<p>A simple thing I tried out today: downloading and extracting a web-based tarfile in a one-liner.</p>
<p>It&#39;s <em>really</em> simple.</p>
<pre><code>$ tar zxv &lt; &lt;(wget -q -O - http://www.somewhere.com/atarfile.tar.gz)
&gt; atarfile/
&gt; atarfile/file.txt
&gt; ...
</code></pre>
<p>So let&#39;s look at this statement in a little detail. Basically we evaluate the right side of the expression first. We&#39;re using wget to download, quietly (-q), a tgz file from somewhere on the internet and output the file&#39;s contents to stdout (-O -).</p>
<p>We use Bash&#39;s <a href="http://www.gnu.org/s/bash/manual/bash.html#Process-Substitution">Process Substitution</a> operator &#39;&lt;(&#39; to create a <a href="http://en.wikipedia.org/wiki/Named_pipe">Named Pipe</a>, which effectively gives us a temporary file descriptor, and we then direct the contents of that descriptor into tar using the &#39;&lt;&#39; file redirection operator.</p>
<p>Sounds complicated but looks simple.</p>
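<p>You can poke at the mechanism directly (bash is required; process substitution is not POSIX sh):</p>

```shell
# <(cmd) expands to a file name (typically a /dev/fd entry or named pipe)
# whose contents are cmd's stdout; echo shows the name, cat reads it
echo <(true)        # something like /dev/fd/63
cat < <(echo hello) # hello
```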
]]></description><link>http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/11437124117/download-and-extract-oneliner-with-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 15:20:00 GMT</pubDate></item><item><title><![CDATA[Disabling Modules in Drupal with Drush]]></title><description><![CDATA[<p>Upgrading a real-world Drupal 6 install to Drupal 7 is a chore. One of the more annoying aspects of the <a href="http://drupal.org/node/570162">core upgrade process</a> is step 5, &#39;disabling non-core modules&#39;.</p>
<p>The UI for disabling modules is not nice, particularly once module dependencies are thrown into the mix. It&#39;s not a case of simply working through the list of modules unchecking everything. First you need to uncheck everything you can and save your changes. Then you go through the list again, disabling the previously greyed-out modules (the ones that still had active dependants). It can take about 4-5 passes before you&#39;re done.</p>
<p>Thankfully there is a much easier way using <a href="http://www.drush.ws">Drush</a>. Drush is a command-line utility for managing aspects of a Drupal installation. Pertinently for Drupal upgraders it provides the ability to:</p>
<ul>
<li>List installed modules</li>
<li>Disable modules</li>
</ul>
<p>Running Drush without any arguments provides a list of commands and options. The two commands we are interested in are &#39;pm-list&#39; (list installed modules) and &#39;pm-disable&#39; (disable a module).</p>
<p>For these commands to work, you must issue your calls to drush from within your Drupal site&#39;s install directory e.g. /home/user/public_html/drupal</p>
<p>Keep in mind you can get information on any of Drush&#39;s commands by issuing a call to the help command.</p>
<pre><code>$ drush help pm-list
&gt; Show a list of available extensions (modules and themes).
&gt; 
&gt; Options:
&gt;  --type                                    Filter by extension type. Choices:   
&gt;                                            module, theme.                       
&gt;  --status                                  Filter by extension status. Choices: 
&gt;                                            enabled, disable and/or &#39;not         
&gt;                                            installed&#39;. You can use multiple     
&gt;                                            comma separated values. (i.e.        
&gt;                                            --status=&quot;disabled,not installed&quot;).  
&gt;  --package                                 Filter by project packages. You can  
&gt;                                            use multiple comma separated values. 
&gt;                                            (i.e. --package=&quot;Core -              
&gt;                                            required,Other&quot;).                    
&gt;  --core                                    Filter out extensions that are not   
&gt;                                            in drupal core.                      
&gt;  --no-core                                 Filter out extensions that are       
&gt;                                            provided by drupal core.             
&gt;  --pipe                                    Returns a space delimited list of    
&gt;                                            the names of the resulting           
&gt;                                            extensions.                          
&gt; 
&gt; 
&gt; Aliases: pml
</code></pre>
<p>We want a list of non-core modules and we would prefer non-formatted output so our command call looks like this:</p>
<pre><code>$ drush pm-list --type=module --no-core --pipe
&gt; ad
&gt; ad_channel
&gt; click_filter
&gt; ad_embed
&gt; ad_cache_file
&gt; ad_notify
&gt; ad_owners
&gt; ad_report
&gt; ...
</code></pre>
<p>Now we can produce a list of non-core modules to pass to Drush&#39;s disable command.</p>
<pre><code>$ drush help pm-disable
&gt; Disable one or more extensions (modules or themes). Disable dependant extensions as well.
&gt;
&gt; Arguments:
&gt;  extensions                                A list of modules or themes. You can 
&gt;                                           use the * wildcard at the end of     
&gt;                                           extension names to disable multiple  
&gt;                                           matches.                             
&gt;
&gt;
&gt; Aliases: dis
</code></pre>
<p>Simple then. Putting it all together, we can capture the module list command&#39;s output using <a href="http://tldp.org/LDP/abs/html/commandsub.html">shell command substitution</a> and pass it as arguments to the disable command, all in one neat line.</p>
<pre><code>$ drush pm-disable `drush pm-list --no-core --type=module --pipe`
</code></pre>
<p>Press &#39;y&#39; in response to the resulting confirmation prompt and the tedious work in the Drupal admin UI is replaced with a neat one-line shell command!</p>
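<p>The chaining trick is just ordinary command substitution, which you can see in isolation with standard tools (the module names below are made up):</p>

```shell
# the inner command's stdout is split into words, which then become
# arguments to the outer command -- exactly how the drush one-liner works
printf 'ad\nad_channel\nclick_filter\n' > /tmp/modules.txt
echo disable: $(cat /tmp/modules.txt)   # disable: ad ad_channel click_filter
```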
]]></description><link>http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</link><guid isPermaLink="true">http://aaronbonner.io/post/11435858035/disabling-modules-in-drupal-with-drush</guid><category><![CDATA[drupal]]></category><category><![CDATA[drush]]></category><category><![CDATA[cms]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Oct 2011 14:25:00 GMT</pubDate></item><item><title><![CDATA[Git - Creating a new Remote Branch]]></title><description><![CDATA[<p>So, Git really is a kernel hacker&#39;s versioning system. It&#39;s powerful and cool, and comes with not inconsiderable barriers to entry for newcomers. You have to remember a lot of command-line syntax to use it effectively.</p>
<p>Branching and merging on a local repository is trivially simple, but what if you create a local branch on a repository that otherwise syncs up with a remote repository?</p>
<p>Follow these steps (this assumes a remote called origin is set up):</p>
<pre><code>$ git branch my-new-feature
$ ... # hack away
$ git commit -a -m &#39;Initial feature commit&#39;
$ git push origin my-new-feature
$ git branch --set-upstream my-new-feature origin/my-new-feature
</code></pre>
<p>Git pull/fetch/push will now track the newly created remote branch called &#39;my-new-feature&#39;.</p>
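<p>As a side note, git 1.7.0 and later can collapse the last two steps into one with the -u flag, which pushes and sets the upstream in a single command. A throwaway, self-contained sketch (the paths and names are invented for the demo):</p>

```shell
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"                     # stand-in remote
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m 'Initial feature commit'
git checkout -qb my-new-feature
# -u (--set-upstream) replaces the separate git branch --set-upstream call
git push -q -u origin my-new-feature
git rev-parse --abbrev-ref --symbolic-full-name '@{u}'   # origin/my-new-feature
```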
]]></description><link>http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</link><guid isPermaLink="true">http://aaronbonner.io/post/9291458554/git-creating-a-new-remote-branch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Aug 2011 13:17:35 GMT</pubDate></item><item><title><![CDATA[Add a User to a Supplementary Group]]></title><description><![CDATA[<p>If you want to add a unix user to a supplementary group (say for example user &#39;aaron&#39; belongs to group &#39;aaron&#39; but I want to add him to the &#39;wheel&#39; group as well) you use the usermod command.</p>
<pre><code>$ usermod -a -G wheel aaron
</code></pre>
<p>The -a argument is <em>very</em> important, it ensures arguments to -G <em>append</em> to the existing list of groups. Otherwise existing groups will be replaced with the argument supplied.</p>
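<p>Note the change only takes effect in new login sessions; existing sessions keep their old group list. You can check a user&#39;s current supplementary groups before and after with id:</p>

```shell
# list the supplementary groups of the current (or a named) user;
# -n prints names rather than numeric ids, -G selects all group memberships
id -nG
id -nG root   # groups for a specific user, assuming the account exists
```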
]]></description><link>http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</link><guid isPermaLink="true">http://aaronbonner.io/post/8734886337/add-a-user-to-a-supplementary-group</guid><category><![CDATA[linux]]></category><category><![CDATA[unix]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 10 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[DocBlox the Speedy Alternative to PHPDocumentor]]></title><description><![CDATA[<p>Running a continuous integration process where you generate all your build artefacts on every build will push you to look for speed optimisations.</p>
<p>With an application I am currently configuring for Jenkins, it was taking about 30-35 minutes to generate all the API documentation through <a href="http://www.phpdoc.org/">PHPDocumentor</a>. I had heard good things about <a href="http://www.docblox-project.org/">DocBlox</a> - most notably its speed - and it&#39;s used by the Zend Framework, so I thought I&#39;d sub it in for PHPDocumentor and see if it was quicker.</p>
<h2>Installation</h2>
<p>DocBlox has a number of dependencies which we also need to install if they are not already present.</p>
<ul>
<li><a href="http://php.net/manual/en/book.xsl.php">XSL PHP</a></li>
<li><a href="http://michelf.com/projects/php-markdown/">PHP-Markdown</a> </li>
<li><a href="http://www.graphviz.org/">Graphviz</a></li>
</ul>
<p>Use your system&#39;s package manager to install the xsl php extension and graphviz.</p>
<pre><code>$ sudo aptitude install php5-xsl
$ sudo aptitude install graphviz
</code></pre>
<p>To install DocBlox and PHP-Markdown use PEAR.</p>
<pre><code>$ sudo pear channel-discover pear.michelf.com
$ sudo pear channel-discover pear.docblox-project.org
$ sudo pear install --alldeps channel://pear.docblox-project.org/DocBlox-0.13.1
</code></pre>
<h2>How Fast?</h2>
<p>DocBlox uses largely the same command line options as phpdoc, so it can act as a drop-in replacement if you already have existing build scripts utilising phpdoc.</p>
<p>In terms of speed, well, it&#39;s very fast. That 35 minute phpdoc run took just 43 seconds in DocBlox.</p>
]]></description><link>http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</link><guid isPermaLink="true">http://aaronbonner.io/post/8695385627/docblox-the-speedy-alternative-to-phpdocumentor</guid><category><![CDATA[phpdoc]]></category><category><![CDATA[php]]></category><category><![CDATA[documentation]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 09 Aug 2011 17:44:00 GMT</pubDate></item><item><title><![CDATA[A Usecase for Git Stash]]></title><description><![CDATA[<p>A simple useful application for <a href="http://www.kernel.org/pub/software/scm/git/docs/git-stash.html">Git&#39;s stash feature</a> came about today when I started making some amendments to a repo master branch, forgetting I wasn&#39;t on my develop branch. I didn&#39;t want to make the changes to the master branch and I didn&#39;t want to have to copy the files between the two branches manually.</p>
<p>Git stash to the rescue:</p>
<pre><code>$ git stash save
$ git checkout develop
$ git stash apply
$ git commit -m &#39;Apply stashed changes&#39;
</code></pre>
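<p>Worth noting: <em>apply</em> leaves the stash entry on the stack. If you want to apply the changes and discard the entry in one go, use pop instead; a sketch of the same workflow:</p>

```shell
git stash          # bare 'git stash' is shorthand for 'git stash save'
git checkout develop
git stash pop      # apply the stashed changes and drop the stash entry
```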
]]></description><link>http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</link><guid isPermaLink="true">http://aaronbonner.io/post/8515088051/a-usecase-for-git-stash</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:58:23 GMT</pubDate></item><item><title><![CDATA[Checkout a Remote Branch in Git]]></title><description><![CDATA[<p>If you have cloned a remote git repository you default into a checkout of the master branch. Odds are there will be other remote branches, and usually one called &#39;develop&#39;.</p>
<p>To start developing on this branch instead of the master, use the following command:</p>
<pre><code>$ git checkout -b develop origin/develop
&gt; Branch develop set up to track remote branch develop from origin.
&gt; Switched to a new branch &#39;develop&#39;
</code></pre>
<p>This command creates a local branch tracking the remote develop branch and switches your working directory to this branch.</p>
<p>If you already have a local develop branch and want it to track the remote, you can use this instead:</p>
<pre><code>$ git checkout develop
$ git branch --set-upstream develop origin/develop
</code></pre>
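<p>More recent git versions also accept a --track shorthand that names the local branch after the remote-tracking one, so the first form can be shortened to:</p>

```shell
# creates a local 'develop' branch tracking origin/develop and switches to it
git checkout --track origin/develop
```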
]]></description><link>http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</link><guid isPermaLink="true">http://aaronbonner.io/post/8514870063/checkout-a-remote-branch-in-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 05 Aug 2011 14:49:00 GMT</pubDate></item><item><title><![CDATA[Installing the Zend Framework from PEAR]]></title><description><![CDATA[<p>To install the Zend Framework on your system globally (which means you don&#39;t have to bundle it in your local applications, plus you get the zf commandline tool) do the following:</p>
<pre><code>$ sudo pear channel-discover pear.zfcampus.org
$ sudo pear install zfcampus/ZF
</code></pre>
<p>When configuring applications, if you wish to use an application&#39;s bundled copy of ZF, ensure local libs come first in PHP&#39;s include_path.</p>
]]></description><link>http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</link><guid isPermaLink="true">http://aaronbonner.io/post/8381734479/installing-the-zend-framework-from-pear</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:43:44 GMT</pubDate></item><item><title><![CDATA[Cannot Build Git Project in Jenkins]]></title><description><![CDATA[<p>In Jenkins, when sourcing a project from a Git repository you may find your builds failing. This is due to a configuration problem, where your Jenkins user does not have a git identity set.</p>
<p>To solve the issue, change into your Jenkins home directory (/var/lib/jenkins on Ubuntu) and add the following to a file named .gitconfig.</p>
<pre><code>[user]
    name = Jenkins
    email = jenkins@localhost
</code></pre>
<p>That will allow you to avoid this:</p>
<blockquote>
<p>Started by user anonymous
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Using strategy: Default
Checkout:workspace / /var/lib/jenkins/jobs/Bookings/workspace - hudson.remoting.LocalChannel@289d9155
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from file:///home/aaron/Sites/Bookings
Seen branch in repository origin/HEAD
Seen branch in repository origin/master
Commencing build of Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
Checking out Revision fc8a70bc628edb0ac547b3fefc0841db5e06204c (origin/HEAD, origin/master)
FATAL: Could not apply tag jenkins-Bookings-1
hudson.plugins.git.GitException: Could not apply tag jenkins-Bookings-1
	at hudson.plugins.git.GitAPI.tag(GitAPI.java:698)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1181)
	at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:1129)
	at hudson.FilePath.act(FilePath.java:758)
	at hudson.FilePath.act(FilePath.java:740)
	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1129)
	at hudson.model.AbstractProject.checkout(AbstractProject.java:1193)
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:555)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:443)
	at hudson.model.Run.run(Run.java:1376)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:175)
Caused by: hudson.plugins.git.GitException: Error performing command: git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1
Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
<p>Run</p>
<p>  git config --global user.email &quot;<a href="mailto:you@example.com">you@example.com</a>&quot;
  git config --global user.name &quot;Your Name&quot;</p>
<p>to set your account&#39;s default identity.
Omit --global to set the identity only in this repository.</p>
<p>fatal: empty ident  &lt;jenkins@blizzard.(none)&gt; not allowed</p>
<pre><code>at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:744)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:709)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:719)
at hudson.plugins.git.GitAPI.tag(GitAPI.java:696)
... 12 more
</code></pre>
<p>Caused by: hudson.plugins.git.GitException: Command &quot;git tag -a -f -m Jenkins Build #1 jenkins-Bookings-1&quot; returned status code 128: 
*** Please tell me who you are.</p>
</blockquote>
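<p>The error message itself points at the equivalent fix: the same identity can be written with git config run as the jenkins user (a sketch; the name and email are the same placeholders used above):</p>

```shell
# -H makes sudo use the jenkins user's home directory,
# so the settings land in /var/lib/jenkins/.gitconfig
sudo -u jenkins -H git config --global user.name "Jenkins"
sudo -u jenkins -H git config --global user.email "jenkins@localhost"
```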
]]></description><link>http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8381205328/cannot-build-git-project-in-jenkins</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 11:11:07 GMT</pubDate></item><item><title><![CDATA[Creating a PHP Project in Jenkins]]></title><description><![CDATA[<p>I want to look at how to get a very simple PHP project setup in Jenkins using the standard suite of tools and templates as demonstrated on php-jenkins.org.</p>
<p>If you do not have Jenkins installed or are not sure you have your environment setup correctly, please check out my previous tutorial <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a>.</p>
<h2>A quick checklist</h2>
<p>Before we start, make sure you have your environment and Jenkins installation up and running and appropriately configured.</p>
<ul>
<li>You have a JVM and Jenkins installed running at <a href="http://localhost:8080">http://localhost:8080</a></li>
<li>You have the following Jenkins plugins installed<ul>
<li>checkstyle</li>
<li>cloverphp</li>
<li>dry</li>
<li>htmlpublisher</li>
<li>jdepend</li>
<li>plot</li>
<li>pmd</li>
<li>violations</li>
<li>xunit</li>
</ul>
</li>
<li>You have the following PEAR packages installed<ul>
<li>pdepend/PHP_Depend</li>
<li>phpmd/PHP_PMD</li>
<li>phpunit/phpcpd</li>
<li>phpunit/phploc</li>
<li>PHPDocumentor</li>
<li>PHP_CodeSniffer</li>
<li>phpunit/PHP_CodeBrowser</li>
<li>phpunit/PHPUnit</li>
<li>phpunit/ppw</li>
</ul>
</li>
<li>You have Ant installed</li>
<li>You have Git installed and are familiar with its basic use</li>
<li>You have the Zend_Framework installed globally as a PEAR package</li>
</ul>
<p>If you don&#39;t have the above installed please refer to the <a href="http://aaronbonner.tumblr.com/post/4965561040/getting-started-with-jenkins-for-php">Getting Started With Jenkins for PHP</a> tutorial. </p>
<p>If you&#39;re not familiar with Git, have a quick read of <a href="http://git-scm.com">git-scm.com</a> or refer to <a href="http://aaronbonner.tumblr.com/post/5525556420/git-notes">my Git cheatsheet</a>.</p>
<p>To install the Zend Framework as a PEAR package see my post on <a href="http://aaronbonner.tumblr.com/post/8381734479/installing-the-zend-framework-from-pear">Installing Zend Framework from PEAR</a>.</p>
<h2>Create your project</h2>
<p>I&#39;m basing this tutorial on a sample Zend Framework application; you&#39;re free to use whatever codebase you like, but for the purposes of this tutorial I&#39;m using this project as a starting point: git@github.com:ajbonner/Bookings.git.</p>
<pre><code>$ git clone https://github.com/ajbonner/Bookings.git ./bookings
$ cd bookings
$ ppw --name &#39;Bookings&#39; --source ./application .
</code></pre>
<p>This will generate an ant build file with some general defaults for PHP_CodeSniffer and PHP Mess Detector along with a standard phpunit configuration file.</p>
<p>First, we need to make a small amendment to the phpunit configuration file. Since we&#39;re testing a Zend Framework application, we need to make sure PHPUnit loads a custom bootstrap before running tests. Open up the generated phpunit.xml.dist file and change the opening phpunit element to look like this:</p>
<pre><code>&lt;phpunit backupGlobals=&quot;false&quot;
         backupStaticAttributes=&quot;false&quot;
         strict=&quot;true&quot;
         verbose=&quot;true&quot;
         colors=&quot;true&quot;
         bootstrap=&quot;tests/bootstrap.php&quot;&gt;
</code></pre>
<p>Our build environment is now configured to run Zend Framework test cases: we have our ant build file set up and some defaults configured for our code coverage and analysis tools. Let&#39;s add our new build and phpunit configuration files to git and commit our changes. Then we can run ant and see what happens.</p>
<pre><code>$ git add build.xml phpunit.xml.dist
$ git commit -m &#39;Add build and test configuration files&#39;
$ ant
&gt;&gt; Buildfile: /home/aaron/Sites/Bookings/build.xml
&gt;&gt; clean:
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/api
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/logs
&gt;&gt;    [delete] Deleting directory /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/api
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/code-browser
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/coverage
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/logs
&gt;&gt;     [mkdir] Created dir: /home/aaron/Sites/Bookings/build/pdepend
&gt;&gt; 
&gt;&gt; parallelTasks:
&gt;&gt; 
&gt;&gt; phpcpd:
&gt;&gt; 
&gt;&gt; pdepend:
&gt;&gt;      [exec] PHP_Depend 0.10.5 by Manuel Pichler
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Parsing source files:
&gt;&gt;      [exec] ....                                                             4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Coupling-Analyzer:
&gt;&gt;      [exec]                                                                 18
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing CyclomaticComplexity-Analyzer:
&gt;&gt;      [exec] .                                                               24
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Dependency-Analyzer:
&gt;&gt;      [exec]                                                                 17
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing Inheritance-Analyzer:
&gt;&gt;      [exec]                                                                  5
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeCount-Analyzer:
&gt;&gt;      [exec]                                                                 14
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Executing NodeLoc-Analyzer:
&gt;&gt;      [exec] phpcpd 1.3.2 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] 0.00% duplicated lines out of 119 total lines of code.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 0 seconds, Memory: 2.75Mb
&gt;&gt;      [exec] .                                                               22
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating pdepend log files, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 00:00; Memory: 9.50Mb
&gt;&gt; 
&gt;&gt; phpcs:
&gt;&gt; 
&gt;&gt; phpmd:
&gt;&gt;      [exec] Result: 1
&gt;&gt; 
&gt;&gt; phploc:
&gt;&gt;      [exec] phploc 1.6.1 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Directories:                                          1
&gt;&gt;      [exec] Files:                                                4
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Lines of Code (LOC):                                119
&gt;&gt;      [exec]   Cyclomatic Complexity / Lines of Code:           0.08
&gt;&gt;      [exec] Comment Lines of Code (CLOC):                        18
&gt;&gt;      [exec] Non-Comment Lines of Code (NCLOC):                  101
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Namespaces:                                           0
&gt;&gt;      [exec] Interfaces:                                           0
&gt;&gt;      [exec] Classes:                                              4
&gt;&gt;      [exec]   Abstract:                                           0 (0.00%)
&gt;&gt;      [exec]   Concrete:                                           4 (100.00%)
&gt;&gt;      [exec]   Average Class Length (NCLOC):                      22
&gt;&gt;      [exec] Methods:                                              9
&gt;&gt;      [exec]   Scope:
&gt;&gt;      [exec]     Non-Static:                                       9 (100.00%)
&gt;&gt;      [exec]     Static:                                           0 (0.00%)
&gt;&gt;      [exec]   Visibility:
&gt;&gt;      [exec]     Public:                                           8 (88.89%)
&gt;&gt;      [exec]     Non-Public:                                       1 (11.11%)
&gt;&gt;      [exec]   Average Method Length (NCLOC):                     10
&gt;&gt;      [exec]   Cyclomatic Complexity / Number of Methods:       1.89
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Anonymous Functions:                                  0
&gt;&gt;      [exec] Functions:                                            0
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Constants:                                            0
&gt;&gt;      [exec]   Global constants:                                   0
&gt;&gt;      [exec]   Class constants:                                    0
&gt;&gt; 
&gt;&gt; phpunit:
&gt;&gt;      [exec] PHPUnit 3.5.14 by Sebastian Bergmann.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] ...
&gt;&gt;      [exec] .
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Time: 1 second, Memory: 19.25Mb
&gt;&gt;      [exec] 
&gt;&gt;             OK (4 tests, 16 assertions)
&gt;&gt; 
&gt;&gt;      [exec] Writing code coverage data to XML file, this may take a moment.
&gt;&gt;      [exec] 
&gt;&gt;      [exec] Generating code coverage report, this may take a moment.
&gt;&gt; 
&gt;&gt; phpcb:
&gt;&gt; 
&gt;&gt; build:
&gt;&gt; 
&gt;&gt; BUILD SUCCESSFUL
&gt;&gt; Total time: 3 seconds
&gt;&gt; 
</code></pre>
<p>If you have everything set up correctly you will have a build directory with a number of artifacts: code coverage analysis, code style reports, dependency analysis, etc. You can look at these directly but it&#39;s much more convenient to use Jenkins to view these artifacts.</p>
<h2>Importing a Project into Jenkins</h2>
<p>We have a working build and we are producing a number of artifacts representing certain aspects of our software. Time now to import the project into Jenkins and have this process run automatically.</p>
<h3>Step 1 - Create a New Jenkins Project</h3>
<p>As part of getting our environment ready for Jenkins, we installed a PHP Project Template. To create our new Project, we copy this template, and give our project a name. Choose whatever you like, but for the purposes of this tutorial, I&#39;ve chosen &#39;Bookings&#39;.</p>
<p><img src="/images/tumblr_lpatcuLVWU1qac7a1.png" alt=""></p>
<h3>Step 2 - Configure your New Project</h3>
<p>You&#39;re now presented with an (at first glance) daunting configuration page. We&#39;ll work our way down the page; for the most part we don&#39;t need to make major changes.</p>
<p>Firstly, the pdepend task generates a pair of SVG images; in order to have these display on the project dashboard, we need to replace &#39;job-name&#39; with &#39;Bookings&#39; (or whatever you chose for your project name) in the embed tags in the description field.</p>
<p><img src="/images/tumblr_lpatdc5o631qac7a1.png" alt=""></p>
<p>Next, uncheck &#39;disable build&#39;; naturally we want to perform builds!</p>
<p>The project will be retrieved from the git repository we created during the initial build configuration. Pass in the absolute filesystem path to your repository, for example mine is file:///home/aaron/Sites/Bookings.</p>
<p><img src="/images/tumblr_lpatdsJ4KF1qac7a1.png" alt=""></p>
<p>That&#39;s it. You have now set up the project in Jenkins and defined a source. The remaining details can be left at their defaults. Scroll down to the bottom of the page and click the save button.</p>
<p><img src="/images/tumblr_lpate3P5p31qac7a1.png" alt=""></p>
<h3>Step 3 - Build the Project</h3>
<p>The project needs to run a first build before its workspace can be initialised. On the left-hand project menu is a &#39;build now&#39; link. Click that and Jenkins will check out a copy of our project from git then process the ant build.xml we defined.</p>
<p><img src="/images/tumblr_lpatepVoOU1qac7a1.png" alt=""></p>
<p>Once the build is complete you can click on the build and take in all the diagnostic and code analysis artifacts generated during the build.</p>
<p><img src="/images/tumblr_lpatexgdGa1qac7a1.png" alt=""></p>
<p>You may run into an issue building your project from git. A common cause is Jenkins being unable to find a user identity for its operating system user. Please see my post <a href="http://aaronbonner.tumblr.com/post/8381205328/cannot-build-git-project-in-jenkins">Cannot Build Git Project in Jenkins</a> if this happens to you.</p>
<h2>What Next?</h2>
<p>Continuous integration isn&#39;t about pretty graphs and pithy build summaries. It&#39;s about maintaining working software and highlighting issues as soon as possible. Jenkins has a wealth of plugins that help this process, but perhaps the first one to check out is the email functionality, letting your development team know immediately if something is broken.</p>
]]></description><link>http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</link><guid isPermaLink="true">http://aaronbonner.io/post/8380014964/creating-a-php-project-in-jenkins</guid><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 02 Aug 2011 09:58:00 GMT</pubDate></item><item><title><![CDATA[Working with Jenkins from the CLI]]></title><description><![CDATA[<p>Once you have <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Jenkins up and running</a> you can manage most administrative tasks with a handy cli jar file or by using simple http requests.</p>
<p>Assuming you&#39;re running your server on port 8080, obtain the cli jar file like this:</p>
<pre><code>$ wget http://localhost:8080/jnlpJars/jenkins-cli.jar
</code></pre>
<p>To see a list of available commands simply run the tool with the help argument:</p>
<pre><code>$ java -jar jenkins-cli.jar -s http://localhost:8080 help
&gt; build
&gt;   Builds a job, and optionally waits until its completion.
&gt; clear-queue
&gt;   Clears the build queue
&gt; connect-node
&gt;   Reconnect to a node
&gt; copy-job
&gt;   Copies a job
&gt; ...
</code></pre>
<p>In recent versions of Jenkins, typical server operations have been decoupled from the cli tool and are now issued using simple http requests. For example, to reload a Jenkins instance&#39;s configuration, you would just fire an HTTP request at it like this:</p>
<pre><code>$ curl http://localhost:8080/reload
</code></pre>
<p>There are three commands of this sort:</p>
<ul>
<li>reload - Reload server configuration</li>
<li>restart - Restart the server</li>
<li>exit - Close the server down</li>
</ul>
<p>Issue these in the format:</p>
<pre><code>$ curl http://[jenkins-server]/[command]
</code></pre>
]]></description><link>http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</link><guid isPermaLink="true">http://aaronbonner.io/post/8341292484/working-with-jenkins-from-the-cli</guid><category><![CDATA[jenkins]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 15:07:00 GMT</pubDate></item><item><title><![CDATA[Installing Jenkins on Ubuntu]]></title><description><![CDATA[<p>I just wanted to write up a quick tutorial for installing <a href="http://jenkins-ci.org/">Jenkins CI</a> on modern flavours of Ubuntu.</p>
<p>There are two options: you can download a <a href="http://pkg.jenkins-ci.org/debian/">pre-prepped package</a> and install it using aptitude or synaptic, <em>or</em> you can set up an update site to ensure you&#39;re always notified of updates.</p>
<p>I prefer the latter and used the following steps:</p>
<pre><code>$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo &#39;deb http://pkg.jenkins-ci.org/debian binary/&#39; | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install jenkins
</code></pre>
<p>By default, the Jenkins packages attempt to start up a java web container running on port 8080. To change this, open up /etc/default/jenkins and look for the following:</p>
<pre><code># port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=8080
</code></pre>
<p>Change the port number to whatever works for you and (re)start the service</p>
<pre><code>$ sudo service jenkins restart
</code></pre>
]]></description><link>http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</link><guid isPermaLink="true">http://aaronbonner.io/post/8339092868/installing-jenkins-on-ubuntu</guid><category><![CDATA[jenkins]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:17:00 GMT</pubDate></item><item><title><![CDATA[Magento Caches Database Columns]]></title><description><![CDATA[<p>Just had to &#39;reblog&#39; this, as of all the caching quirks in Magento, I have burned the most time on account of this one.</p>
<p><a href="http://magento-quickies.tumblr.com/post/8135152231" class="tumblr_blog">magento-quickies</a>:</p>

<blockquote><p>When you&#8217;re creating custom Magento models, it&#8217;s common to add and remove fields to and from a database table during development before <em>finalizing</em> a set of columns for the setup resource/migration file. One thing to keep in mind while you&#8217;re doing this is that Magento will cache the list of table columns each model knows about, which means a cache clearing will be needed after adding a column.</p></blockquote>

]]></description><link>http://aaronbonner.io/post/8338979329/magento-caches-database-columns</link><guid isPermaLink="true">http://aaronbonner.io/post/8338979329/magento-caches-database-columns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 01 Aug 2011 13:11:00 GMT</pubDate></item><item><title><![CDATA[Listing Large MySQL Tables in a Database]]></title><description><![CDATA[<p>Over the past couple of months one of my clients started noticing backups were taking a long time and the size of the resulting backups was becoming an issue. To help identify exceptionally large tables I made use of MySQL&#39;s INFORMATION_SCHEMA feature and the following query worked very well.</p>
<script src="https://gist.github.com/1082541.js?file=tablesize_report.sql"></script>]]></description><link>http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</link><guid isPermaLink="true">http://aaronbonner.io/post/7613627958/listing-large-mysql-tables-in-a-database</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 14 Jul 2011 14:21:00 GMT</pubDate></item><item><title><![CDATA[Resuming a Failed Sftp/Scp Operation using Rsync]]></title><description><![CDATA[<p>Rsync is probably one of my favourite unix tools, you can use it for backups, for application deployments, for tree comparisons and so on. It&#39;s an infinitely flexible and helpful tool.</p>
<p>Another great thing it can do is resume broken file transfers, e.g. from an sftp, scp, http or ftp transfer.</p>
<pre><code>rsync --partial --progress --rsh=ssh localfile remoteuser@remotehost.com:/remote/path
</code></pre>
]]></description><link>http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</link><guid isPermaLink="true">http://aaronbonner.io/post/7081006278/resuming-a-failed-sftpscp-operation-using-rsync</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Jun 2011 13:48:00 GMT</pubDate></item><item><title><![CDATA[Eclipse Randomly Crashes in Ubuntu 11.04]]></title><description><![CDATA[<p>I&#39;ve suffered more from this running Eclipse Indigo on a Natty Virtual Machine, but I occasionally get it on my bare metal desktop too. Basically you&#39;ll be merrily hacking away and Eclipse will suddenly crash. Hard. It&#39;s as if the application window has entered the Bermuda Triangle.</p>
<p>There is some trace though, a hs_* dump file in your home directory. In that file is a message (along with a lot of other verbiage) like this</p>
<pre><code># Problematic frame:
# C [libswt-pi-gtk-3659.so+0x3c17e]
Java_org_eclipse_swt_internal_gtk_OS_GTK_1ACCEL_1LABEL_1GET_1ACCEL_1STRING+0x0
</code></pre>
<p>Not really helpful in itself, but it was enough for Google to lead me to a solution. That solution is here: <a href="https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123">https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/789123</a> (see comment 18).</p>
<p>To summarise, you basically need to uninstall overlay-scrollbar and liboverlay-scrollbar-0.1-0.</p>
<p>This can be achieved using either the package manager or aptitude:</p>
<pre><code>sudo aptitude remove overlay-scrollbar liboverlay-scrollbar-0.1-0
</code></pre>
<p>This has the added benefit of removing the annoying scrollbars that come with Natty.</p>
]]></description><link>http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/7039686960/eclipse-randomly-crashes-in-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 29 Jun 2011 08:44:00 GMT</pubDate></item><item><title><![CDATA[Sending Attachments with Zend_Mail]]></title><description><![CDATA[<p>Quick example on how to easily send a file using Zend Framework&#39;s Zend_Mail class.</p>
<script src="https://gist.github.com/1051424.js"> </script>]]></description><link>http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</link><guid isPermaLink="true">http://aaronbonner.io/post/7012672962/sending-attachments-with-zendmail</guid><category><![CDATA[php]]></category><category><![CDATA[zend]]></category><category><![CDATA[zend framework]]></category><category><![CDATA[zend_mail]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 15:43:00 GMT</pubDate></item><item><title><![CDATA[Mastering Git Basics By Tom Preston-Werner]]></title><description><![CDATA[<p>Hour long tutorial going over the basics of using git. Great for beginners, and instructive for intermediate users looking to fill in gaps in their knowledge about Git. Tom Preston-Werner is co-founder of Github.</p>
]]></description><link>http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</link><guid isPermaLink="true">http://aaronbonner.io/post/7005980683/mastering-git-basics-by-tom-preston-werner</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 28 Jun 2011 08:44:11 GMT</pubDate></item><item><title><![CDATA[Zend Framework 2 Design Patterns]]></title><link>http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</link><guid isPermaLink="true">http://aaronbonner.io/post/6901688428/zend-framework-2-design-patterns</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 25 Jun 2011 13:33:39 GMT</pubDate></item><item><title><![CDATA[Getting Image Dimensions in Ruby]]></title><description><![CDATA[<p>I wanted to see which images I had in a directory that were a certain set resolution and decided to see how easy it was to do in ruby.</p>
<p>Turns out with a little</p>
<pre><code>gem install image_size
</code></pre>
<p>It&#39;s pretty easy!</p>
<script src="https://gist.github.com/1045193.js"></script>]]></description><link>http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</link><guid isPermaLink="true">http://aaronbonner.io/post/6869644530/getting-image-dimensions-in-ruby</guid><category><![CDATA[ruby]]></category><category><![CDATA[snippet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 24 Jun 2011 16:59:00 GMT</pubDate></item><item><title><![CDATA[Two Essential Update Sites for Eclipse Indigo]]></title><description><![CDATA[<p>With Eclipse Indigo finally <a href="http://www.eclipse.org/downloads/">released</a> it also means time to reinstall our favourite extensions and plugins.</p>
<p>Personally, I maintain a fairly vanilla setup. I use the PHP Developer Tools, the EGit/JGit plugins and Subclipse for SVN.</p>
<p>To get hold of these packages, add the following two update sites after installing Indigo.</p>
<ul>
<li><a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a> - Egit and PDT 3</li>
<li><a href="http://subclipse.tigris.org/update_1.6.x">http://subclipse.tigris.org/update_1.6.x</a> - Subclipse SVN (much better than Subversive)</li>
</ul>
]]></description><link>http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6793073974/two-essential-update-sites-for-eclipse-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[indigo]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 16:20:00 GMT</pubDate></item><item><title><![CDATA[Debugging Magento Payment Methods]]></title><description><![CDATA[<p>Foolishly, when working on a recent gateway implementation (usaepay) I wrote a custom logging function to keep track of what was happening. </p>
<p>It turns out there&#39;s already something built in to do it: </p>
<pre><code>Mage_Payment_Model_Method_Abstract::_debug($data);
</code></pre>
<p>If you want to call it from outside the payment method inheritance tree, use</p>
<pre><code>Mage_Payment_Model_Method_Abstract::debugData($data);
</code></pre>
<p>In both cases your payment method needs to have its debug config setting enabled, e.g. for my usaepay module </p>
<pre><code>echo Mage::getStoreConfig(&#39;payment/usaepay/debug&#39;); 
&gt;&gt; 1
</code></pre>
]]></description><link>http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</link><guid isPermaLink="true">http://aaronbonner.io/post/6785010657/debugging-magento-payment-methods</guid><category><![CDATA[magento]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 22 Jun 2011 08:05:31 GMT</pubDate></item><item><title><![CDATA[Adding existing code to a bare remote git repository]]></title><description><![CDATA[<p>If you have an existing codebase and it&#39;s already in a git repository there&#39;s a number of ways to get it into github (or any other remote git repository). Many of these involve losing your branch history and creating a brand new repository.</p>
<p>This is where the <em>git remote add</em> command comes in handy.</p>
<pre><code>$ git remote add origin git@github.com:ajbonner/foo.git
</code></pre>
<p>You can see the remotes associated with your branch:</p>
<pre><code>$ git remote -v
&gt;&gt; origin	git@github.com:ajbonner/foo.git (fetch)
&gt;&gt; origin	git@github.com:ajbonner/foo.git (push)
</code></pre>
<p>To push the existing code to the remote:</p>
<pre><code>$ git push origin master
</code></pre>
<p>You now have set up the remote and pushed your master branch into it. From here it gets tricky, because subsequent runs of git pull will give you an ugly error:</p>
<pre><code>$ git pull
&gt;&gt; You asked me to pull without telling me which branch you
want to merge with, and &#39;branch.master.merge&#39; in
your configuration file does not tell me, either. Please
specify which branch you want to use on the command line and
try again (e.g. &#39;git pull &lt;repository&gt; &lt;refspec&gt;&#39;).
See git-pull(1) for details.

If you often merge with the same branch, you may want to
use something like the following in your configuration file:

[branch &quot;master&quot;]
remote = &lt;nickname&gt;
merge = &lt;remote-ref&gt;

[remote &quot;&lt;nickname&gt;&quot;]
url = &lt;url&gt;
fetch = &lt;refspec&gt;

See git-config(1) for details.
</code></pre>
<p>There are a few ways to get around this:</p>
<ol>
<li><p>Follow the directions given by git and edit your gitconfig </p>
</li>
<li><p>Clone a fresh copy of the master branch from the remote </p>
</li>
<li><p>When defining the remote add the --track option and give it the name of the master branch </p>
<p> $ git remote add --track master origin git@github.com:ajbonner/foo.git</p>
</li>
<li><p>Refer to the remote branch using --set-upstream </p>
<p> $ git branch --set-upstream master origin/master</p>
</li>
</ol>
<p>Personally I find option 4 the best, as it involves the least work.</p>
<p>You can also avoid the problem altogether with:</p>
<pre><code>$ git config --global branch.autosetupmerge true
</code></pre>
]]></description><link>http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</link><guid isPermaLink="true">http://aaronbonner.io/post/6751224485/adding-existing-code-to-a-bare-remote-git</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 21 Jun 2011 08:54:00 GMT</pubDate></item><item><title><![CDATA[Getting started with EcomDev_PHPUnit and Magento]]></title><description><![CDATA[<p>Magento is a complicated piece of ecommerce software that has been in the past notoriously difficult to employ TDD practices on. Luckily in the past few months we&#39;ve had <a href="http://twitter.com/#!/alistairstead">Alistair Stead&#39;s</a> <a href="https://github.com/ibuildings/Mage-Test">Mage_Test</a> and now <a href="http://twitter.com/#!/IvanChepurnyi">Ivan Chepurnyi&#39;s</a> <a href="http://www.ecomdev.org/shop/php-unit-test-suite.html">EcomDev PHPUnit Suite</a>  testing frameworks released, to make this a little less difficult.</p>
<p>On my most recent project I exclusively employed Mage_Test for my unit testing and, with a few exceptions, it&#39;s performed extremely well. </p>
<p>Mage_Test takes a very hands-off, lightweight approach. EcomDev, meanwhile, appears on the surface to provide far greater support for testing, at the expense of some complexity.</p>
<p>Installing EcomDev&#39;s Test module is easy. You can <a href="http://www.magentocommerce.com/magento-connect/Ecommerce%20Developers/extension/5717/ecomdev_phpunit">install it from Magento Connect</a> or you can get the module directly from the developer&#39;s Subversion repository</p>
<pre><code>$ svn co http://svn.ecomdev.org/svn/ecomdev-phpunit/trunk
</code></pre>
<p>Once you have the code, it&#39;s just a matter of copying it into your Magento store as with any other module:</p>
<pre><code>$ cp -r &lt;ecomdevtestdir&gt;/* path/to/your/magento/rootdir
</code></pre>
<p>Once the files are in place, you&#39;ll need to edit app/etc/local.xml.phpunit and supply some details for a test database connection and some path information relating to the store site root URIs.</p>
<p>Once your database and paths have been defined, initialise the test database by changing into your store root and running the UnitTests.php test suite:</p>
<pre><code>~/Sites/my/store/root: $ phpunit UnitTests.php
</code></pre>
<p>This will take some time (one of my clients has a 400MB database and a lot of orders; it took ~4 minutes).</p>
<p>You&#39;re now ready to start writing some tests.</p>
<p>For more information on EcomDev&#39;s PHPUnit module see this <a href="http://www.ecomdev.org/2011/05/24/ecomdev_phpunit-manual-version-0-2-0-is-available-for-download.html">blog post</a> or get a copy of the (very) comprehensive <a href="http://bit.ly/mR6uKc">EcomDev PHPUnit 0.20 Manual</a>
    </p>
]]></description><link>http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/6383667465/getting-started-with-ecomdevphpunit-and-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 13:37:00 GMT</pubDate></item><item><title><![CDATA[Mysqldump tables matching a pattern]]></title><description><![CDATA[<p>Yesterday on Twitter someone asked if there was a way to export data from mysql, but only from tables matching a like pattern. E.g. something like</p>
<pre><code>mysqldump -uuser -p mydb mytables_*
</code></pre>
<p>There isn&#39;t an inbuilt mechanism to do this; my reply was to use a shell script with an array containing a list of the tables you wanted to export. Another reply had a better way: use a call to mysql to get a list of tables matching a LIKE pattern, put them in an array, and then iterate over that array with successive calls to mysqldump.</p>
<p>I put this suggestion together with mine, made it more generic and popped it on GitHub for reference.</p>
<p>Personally I&#39;ve placed this in my user&#39;s .bash_functions (included by .bashrc).</p>
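<p>The helper itself is in the gist below; as a rough sketch (function and variable names here are illustrative), the approach is:</p>

```shell
# Dump every table in a database whose name matches a LIKE pattern.
# mysql -N (--skip-column-names) returns a bare list of table names.
mysqldump_bypattern() {
  user="$1"; db="$2"; pattern="$3"
  tables=$(mysql -u"$user" -N -e "SHOW TABLES LIKE '$pattern'" "$db")
  for table in $tables; do
    mysqldump -u"$user" "$db" "$table"
  done
}
```

<p>Run it as <code>mysqldump_bypattern myuser mydb &#39;mytables_%&#39; &gt; mytables.sql</code>; add <code>-p</code> to the mysql and mysqldump calls if your server requires a password.</p>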
<script src="https://gist.github.com/1016920.js?file=mysqldump_bypattern.sh"></script>]]></description><link>http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</link><guid isPermaLink="true">http://aaronbonner.io/post/6380590863/mysqldump-tables-matching-a-pattern</guid><category><![CDATA[mysql]]></category><category><![CDATA[mysqldump]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 10 Jun 2011 10:05:00 GMT</pubDate></item><item><title><![CDATA[Installing the Git plugin on Eclipse Indigo RC3/4+]]></title><description><![CDATA[<p><em><em>Edited</em> 10th June &#39;11 - use update site: <a href="http://download.eclipse.org/egit/updates">http://download.eclipse.org/egit/updates</a></em></p>
<p>There is an issue at the moment when you try and install the egit/jgit plugins with Indigo versions RC3 and above.</p>
<p>You&#39;ll encounter an error similar to this:</p>
<blockquote>
<p> An error occurred while collecting items to be installed
 session context was:(profile=epp.package.rcp,
 phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=,
 action=).
 No repository found containing:
 osgi.bundle,org.eclipse.egit,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.core,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.doc,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.egit.ui,1.0.0.201106011211-rc3
 No repository found containing:
 osgi.bundle,org.eclipse.jgit,1.0.0.201106011211-rc3</p>
</blockquote>
<p>This is caused by the latest egit/jgit packages not yet being in the Indigo release p2 update repository. Until these updates are provided, add <a href="http://download.eclipse.org/egit/staging/">http://download.eclipse.org/egit/staging/</a> as an update site and you&#39;ll be able to install these plugins successfully.</p>
<p>I found this solution here: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183">https://bugs.eclipse.org/bugs/show_bug.cgi?id=348183</a> </p>
]]></description><link>http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</link><guid isPermaLink="true">http://aaronbonner.io/post/6319424315/installing-the-git-plugin-on-eclipse-indigo-rc34</guid><category><![CDATA[eclipse]]></category><category><![CDATA[git]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 08 Jun 2011 13:53:00 GMT</pubDate></item><item><title><![CDATA[Hash Access with Symbols for the Ruby Newbie]]></title><description><![CDATA[<p>In PHP it&#39;s very common to use a variable as an associative array key:</p>
<pre><code>$keys = array(&#39;mykey&#39;, &#39;another_key&#39;);
$array = array();
foreach ($keys AS $key) { $array[$key] = &quot;hello world\n&quot;; }
foreach ($keys AS $key) { echo $array[$key]; }

&gt;&gt; hello world
hello world
</code></pre>
<p>In Ruby, the presence (and use) of Symbols makes this a bit tricky:</p>
<pre><code>keys = [&#39;mykey&#39;, &#39;another_key&#39;]
myarray = { :mykey =&gt; &#39;hello world&#39;, &#39;another_key&#39; =&gt; &#39;goodbye world&#39; }
myarray.each_key { |k| 
  puts myarray[k]
}
 
&gt;&gt; hello world
goodbye world
</code></pre>
<p>Now if we try this again, using instead the Strings from the keys array, we get a different result:</p>
<pre><code>keys.each { |k| 
  puts myarray[k]
}
 
&gt;&gt;
goodbye world
</code></pre>
<p>We don&#39;t get the first array value back because a String key is not the same as a Symbol key, even if they consist of the same sequence of characters. This is one of the gotchas with Ruby. </p>
<p>In Ruby the value of a Symbol is not the same as that of a String, so :key != &quot;key&quot;. From an ease-of-use perspective it would be convenient if they were equal.</p>
<p>Thankfully the language stewards saw fit to include String.to_sym as a convenience method to create a Symbol from a String&#39;s value. </p>
<pre><code>keys.each { |k| 
  puts myarray[k.to_sym]
}
 
&gt;&gt; hello world
</code></pre>
]]></description><link>http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</link><guid isPermaLink="true">http://aaronbonner.io/post/6068510492/hash-access-with-symbols-for-the-ruby-newbie</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 01 Jun 2011 10:43:00 GMT</pubDate></item><item><title><![CDATA[Installing PDT 3 on Eclipse 3.7 Indigo ]]></title><description><![CDATA[<p>It&#39;s very simple right now to get the milestone builds of the PHP Developer Tools (PDT) 3 up and running (and a significant improvement on the current Helios SR2 release).</p>
<p>Pull down the &#39;classic&#39; version of Eclipse 3.7 Indigo from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a> and install.</p>
<p>Once installed, launch Eclipse and navigate to Help-&gt;Install new Software.</p>
<p>Add the Indigo update site &#39;<a href="http://download.eclipse.org/releases/indigo">http://download.eclipse.org/releases/indigo</a>&#39;. This will take some time to add; let it go for five or so minutes.</p>
<p><strong>UPDATE 20/10/2011</strong>
The following step is no longer required as the PDT 3.0 series is now in the main Indigo update repository. <s>Once the Indigo Update Site is added, add the PDT 3.0 Update Site <a href="http://download.eclipse.org/tools/pdt/updates/3.0/milestones/">http://download.eclipse.org/tools/pdt/updates/3.0/milestones/</a></s>
<img src="/images/tumblr_lm26jreMXP1qac7a1.png" alt=""></p>
<p>Now, to install simply select PDT Development Tools All in One SDK (leave the others unselected) and click next. The installation process shouldn&#39;t take more than a few minutes.
<img src="/images/tumblr_lm26k2v8Bp1qac7a1.png" alt="">.</p>
]]></description><link>http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</link><guid isPermaLink="true">http://aaronbonner.io/post/6035060125/installing-pdt-3-on-eclipse-37-indigo</guid><category><![CDATA[eclipse]]></category><category><![CDATA[pdt]]></category><category><![CDATA[indigo]]></category><category><![CDATA[ide]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 May 2011 11:58:00 GMT</pubDate></item><item><title><![CDATA[Understanding Ruby Symbols]]></title><description><![CDATA[<p>In these formative stages of my experience with Ruby, one feature of the language that I keep feeling on uneven ground with is Symbols.</p>
<p>I had a feeling I should give them respect, as much of the learning-Ruby literature either says to basically ignore them for now, or launches into lengthy defences of why they exist (usually forgetting to explain how they work).</p>
<p>In case you&#39;re unfamiliar with Ruby syntax, a Symbol looks like this </p>
<pre><code>:a_symbol
</code></pre>
<p>With the numerous ways of defining variables in Ruby, this is another variable-looking construct, except that it isn&#39;t actually a variable at all (you cannot assign to it).</p>
<p>So what <em>is</em> a Symbol, actually? To be honest I&#39;m still struggling for a good, simple definition. I think the best way to think of it is as a constant (i.e. immutable) placeholder for the name of something. Or, more practically put, a colon in front of the string you want to use for a hash key. Agile Web Development With Rails 4th Ed puts it simply: &#39;a Symbol is a name for something&#39;.</p>
<p>I wrote on Twitter that I felt Symbols were extra syntax put in to solve a problem a programmer in a high-level interpreted language shouldn&#39;t have to worry about: memory management. In Ruby everything is an object, and objects are much bigger than the primitives you have in Java, C or even PHP. A string in PHP (as in C) is ultimately a sequence of bytes stored in contiguous memory addresses. In Ruby, a string is always an object, and that requires a fair chunk of memory to represent. If you have a bunch of hash tables, using objects as hash keys is inefficient and wasteful.</p>
<p>Symbols help solve this inefficiency. A Symbol is still an object, but a special one. It has few methods, the main ones returning its string value and its integer value; it&#39;s immutable, and there&#39;s only ever one copy of it. So it&#39;s much more efficient to use a Symbol as a hash key than a String. In Ruby (and most OO languages) two strings, even if they consist of the same sequence of characters, are different objects. In Ruby two Symbols made of the same sequence of characters are the <em>same</em> object. In large applications this feature can save a tremendous amount of memory.</p>
<p>The other key characteristic of Symbols is immutability. Symbols cannot be assigned to; they just are. In a language that lacks a true constant construct (uppercasing a variable name is the convention for defining a Ruby constant, but read-only access isn&#39;t enforced at runtime; you can write to what Ruby calls a &#39;constant&#39;), Symbols can be useful.</p>
<p>So are Symbols a good language feature? At this point, with my lack of experience with Ruby, I don&#39;t feel qualified to answer that definitively. My gut, though, says no. I feel that in a high-level language the need for extra syntax to optimise code adds an unnecessary burden on the programmer. They seem so out of place in a language that works so hard to strip away unnecessary syntax. Symbols, to me, detract from Ruby&#39;s power to define elegant and natural-sounding expressions. </p>
<p>The difficulty authors have in describing what Symbols are, how to use them and why they should be used seems like a language design smell to me. I dare say as I become more familiar and accustomed to the presence and use of Symbols I&#39;ll learn to accept them. But as a Developer new to Ruby, they seem out of place.</p>
<p>I haven&#39;t gone into an excessive amount of detail about the nature and application of Symbols. For that the two best resources I&#39;ve found explaining are <a href="http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols">http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols</a> and <a href="http://www.troubleshooters.com/codecorn/ruby/symbols.htm">http://www.troubleshooters.com/codecorn/ruby/symbols.htm</a>.</p>
]]></description><link>http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</link><guid isPermaLink="true">http://aaronbonner.io/post/5960999188/understanding-ruby-symbols</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 29 May 2011 10:41:00 GMT</pubDate></item><item><title><![CDATA[Commandline Quicklook in OSX]]></title><description><![CDATA[<pre><code>$ alias ql=&#39;qlmanage -p &quot;$@&quot; &gt;&amp; /dev/null&#39;
</code></pre>
]]></description><link>http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/5671683808/commandline-quicklook-in-osx</guid><category><![CDATA[osx]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 18:34:16 GMT</pubDate></item><item><title><![CDATA[Optimising Backups with MySQL]]></title><description><![CDATA[<p>Working with PHP websites you’ll regularly need to export/import copies of MySQL databases, whether for testing and debug purposes or at a minimum creating and restoring backups. As database sizes increase this poses risks particularly, for example, with large innodb based applications like Magento where database sizes can easily go into the gigabytes.</p>&#13;
<h4>The Basics</h4>&#13;
<p>The process for creating and restoring a snapshot is trivial</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase &gt; mydump.sql    # export&#13;
$ mysql -uuser -p mydatabase &lt; mydump.sql        # import&#13;
</code></pre>&#13;
<p>For the most part this works as you expect it to, and for small databases this is probably all that is needed. You may find the resultant .sql file is huge; it is, after all, uncompressed text, at 1 byte per character if the output is ANSI or up to 4 bytes per character if your character set is UTF-8. Bzip2 will bring the file size down a considerable amount, but it’s also considerably slower than gzip.</p>&#13;
<p>It’s sometimes tempting to gzip/bzip2 your datadump while performing the mysqldump in a single line.</p>&#13;
<pre><code>$ mysqldump -uuser -p mydatabase | bzip2 -c &gt; mydump.sql.bz2&#13;
</code></pre>&#13;
<p>While it seems a nice, efficient way to do your backup, it should be avoided: the compression step slows the dump, and (by default with MyISAM) you’re locking tables and denying other clients access to them for the duration. InnoDB implements row-level locking, which is slightly less offensive, but locking should still be avoided as much as possible.</p>&#13;
<p>When importing a large database, the choice of compression format is important. You have to trade off decompression speed against file size. The extra CPU time consumed decompressing a bzip2 datadump may matter more than the few extra megabytes saved over the faster gzip. Whatever your choice, importing a zipped datadump is very easy.</p>&#13;
<pre><code>$ gunzip -c mydump.sql.gz | mysql -uuser -p mydatabase   # importing a gzipped datadump&#13;
$ bunzip2 -c mydump.sql.bz2 | mysql -uuser -p mydatabase  # importing a bzip2 datadump&#13;
</code></pre>&#13;
<h4>Locking and Transactional Integrity</h4>&#13;
<p>As explained briefly above, MySQL’s storage engines come with some limitations. In the worst case, with MyISAM, entire tables will be locked while performing a mysqldump, meaning other clients will not be permitted to write to a table while the dump is in progress. If you have large MyISAM tables, this poses clear problems when backing up a running application.</p>&#13;
<p>InnoDB is slightly better because it uses row level locking. It locks only the rows affected by a query, and not the whole table. This makes a conflict far less likely to occur while performing a backup on a running application. InnoDB as a transactional storage engine does allow for the possibility that active transactions may be underway while you’re attempting your backup.</p>&#13;
<p>Two options we can pass to mysqldump to mitigate these issues are --single-transaction and --skip-lock-tables.</p>&#13;
<pre><code>$ mysqldump --single-transaction --skip-lock-tables -uuser -p mydatabase&#13;
</code></pre>&#13;
<p>The use of --single-transaction means mysqldump issues a BEGIN statement before dumping the contents of a table, ensuring a consistent view of the table without blocking other applications. Writes can occur while the backup is taking place and this will not affect the backup. The --skip-lock-tables option stops MyISAM tables being locked during the backup. This does mean writes occurring during the backup can leave the dump of those tables inconsistent; that risk is weighed against the risk of blocking access to the tables during a lengthy backup.</p>&#13;
<h4>Improving Import Performance</h4>&#13;
<p>Choice of compression format will have a large bearing on import performance; gzip is appreciably faster than bzip2. Other options you can pass to mysqldump to improve import performance are --disable-keys and --no-autocommit.</p>&#13;
<p>Disabling keys significantly improves the performance of imports as mysql will only index the table at the end of the import. With keys enabled, the index is updated after each row is inserted. Given you are performing a batch import, this is suboptimal.</p>&#13;
<p>By default each statement on an InnoDB table is autocommitted. This adds unnecessary overhead when performing a batch import, as you really only need to commit once the table has been fully imported.</p>&#13;
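<p>Putting these options together, a backup step might look like the following sketch (the function name, file naming and gzip choice are illustrative, not a canonical recipe):</p>&#13;

```shell
# Dump one database with the options discussed above, compressing on the fly.
# Note: piping straight into gzip has the locking caveat discussed earlier;
# prefer dumping to disk first on busy MyISAM-heavy servers.
backup_db() {
  user="$1"; db="$2"
  mysqldump -u"$user" \
    --single-transaction --skip-lock-tables \
    --disable-keys --no-autocommit \
    "$db" | gzip -c > "${db}.sql.gz"
}
```

<p>Restore it later with <code>gunzip -c mydatabase.sql.gz | mysql -uuser -p mydatabase</code>; add <code>-p</code> to the mysqldump call if your server requires a password prompt.</p>&#13;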
<h4>Further Reading</h4>&#13;
<p>This is only a brief look at using mysqldump for backups. It’s a common enough development task that all developers should take the time to see how it can be best leveraged for their environment. There’s plenty of documentation out there on using the tool. But the best place to start is with the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html">official docs</a>.</p> ]]></description><link>http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/5662779872/optimising-backups-with-mysql</guid><category><![CDATA[mysql]]></category><category><![CDATA[php]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[sysadmin]]></category><category><![CDATA[backups]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 20 May 2011 08:40:39 GMT</pubDate></item><item><title><![CDATA[Regexps in Vim]]></title><description><![CDATA[<p>If you have a string, for example </p>&#13;
<blockquote>&#13;
<p>"The quick brown fox", "The quicker brown fox".</p>&#13;
</blockquote>&#13;
<p>If you try and use the VIM substitute regexp <code>s/"The quick brown.*"//</code> you'll end up nuking the whole line up to the terminating period mark. This is because the regular expression is acting greedily, and matching up to the second and final " character.</p>&#13;
<p>In Perl-compatible regexps you would simply modify the expanding part of the regexp to be <code>.*?</code>, which makes the expression match as little as possible.</p>&#13;
<p>In VIM the syntax is slightly different. To do a non-greedy expansion, you use <code>.\{-}</code>. For example:</p>&#13;
<pre><code>s/"The quick brown.\{-}"//</code></pre>&#13;
<p>This will leave you with </p>&#13;
<blockquote>&#13;
<p>, "The quicker brown fox".</p>&#13;
</blockquote> ]]></description><link>http://aaronbonner.io/post/5572943816/regexps-in-vim</link><guid isPermaLink="true">http://aaronbonner.io/post/5572943816/regexps-in-vim</guid><category><![CDATA[vim]]></category><category><![CDATA[editors]]></category><category><![CDATA[regex]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 17 May 2011 10:26:00 GMT</pubDate></item><item><title><![CDATA[Git Notes]]></title><description><![CDATA[<h4>Create a new repository</h4>&#13;
<pre><code>$ git init &lt;dirname&gt;&#13;
</code></pre>&#13;
<p>Creates a new git repository in the current directory or, if a directory argument is given, in that directory.</p>&#13;
<h4>Version control a file</h4>&#13;
<pre><code>$ git add file|dir &lt;file|dir&gt; ...&#13;
</code></pre>&#13;
<p>Adds, or to use git terminology ‘stages’, a file for a local commit.</p>&#13;
<h4>Remove a version controlled file</h4>&#13;
<pre><code>$ git rm -rf --cached &lt;file|dir&gt;&#13;
</code></pre>&#13;
<p>This removes a file that has been previously staged (i.e. added) for local commit. It does not remove the local file. If you omit the --cached option, the local file <em>WILL</em> be deleted. The rm command is roughly analogous to: $ svn del</p>&#13;
<h4>Make a local commit</h4>&#13;
<pre><code>$ git commit &lt;-a&gt; -m 'Commit message' &lt;file&gt; ...&#13;
</code></pre>&#13;
<p>The -a flag will commit all pending files recursively in the current path; alternatively you can specify the exact files to commit at the end of the command, separated by spaces.</p>&#13;
<h4>See pending actions or status</h4>&#13;
<pre><code>$ git status &lt;-u no|all|normal&gt; &lt;path&gt; ...&#13;
</code></pre>&#13;
<p>Will show pending files and untracked files in the current working path. Optionally the -u (--untracked-files) option can be supplied to hide/show untracked files in the status report. Passing a path, or a number of paths, will run the status for those paths rather than the current directory.</p>&#13;
<h4>Create a bare centralised remote repository</h4>&#13;
<pre><code>$ git clone --bare pathtoinitialrepo &lt;myrepo.git&gt;&#13;
</code></pre>&#13;
<p>Alternatively if you don’t have an existing repo already:</p>&#13;
<pre><code>$ git init --bare &lt;path&gt;&#13;
</code></pre>&#13;
<p>Often you’ll see the .git extension bandied about, particularly on GitHub. This is a canonical reference to a bare git repository (i.e. one that only has the meta information and not a working copy) that usually acts as a master or central repository.</p>&#13;
<h4>Configuring a local repository to push to a remote</h4>&#13;
<pre><code>$ cd pathtoinitialrepo&#13;
$ git remote add origin ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
$ git push origin master&#13;
</code></pre>&#13;
<p>If you initially clone from the remote repository you can skip the first step of setting up the origin. However if you have just created a new bare remote repository and want to setup an initial commit, you need to setup the origin first. Subsequent pushes to the remote repository can be done with git push origin master. This is the familiar model of subversion and svn commit.</p>&#13;
<h4>Create a local copy of a remote git repository</h4>&#13;
<pre><code>$ git clone ssh://myuser@somehost.com/~/repos/myrepo.git&#13;
</code></pre>&#13;
<p>Create a local copy from remote repository created following the above approach.</p>&#13;
<h4>Ignore changes to a tracked file</h4>&#13;
<pre><code>$ git update-index --assume-unchanged &lt;file&gt;&#13;
</code></pre>&#13;
<p>Use this command if you’ve added a file, e.g. a database defaults config file, but do not want further changes to be picked up for it.</p>&#13;
<p>To resume tracking changes to the file use:</p>&#13;
<pre><code>$ git update-index --no-assume-unchanged &lt;file&gt;&#13;
</code></pre> ]]></description><link>http://aaronbonner.io/post/5525556420/git-notes</link><guid isPermaLink="true">http://aaronbonner.io/post/5525556420/git-notes</guid><category><![CDATA[git]]></category><category><![CDATA[scm]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 22:47:00 GMT</pubDate></item><item><title><![CDATA[Detailed Searching for Rubygems]]></title><description><![CDATA[<p>I'm currently investigating how best to integrate tumblr with vim and markdown, and looking to ruby to provide some help.</p>&#13;
<p>I'm still very new to Ruby and its tooling so I was finding it hard to see the details of available remote gems.</p>&#13;
<p>I was labouring through:</p>&#13;
<pre>$ gem search --remote tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
matenia-tumblr-api (0.1.6)&#13;
ruby-tumblr (0.0.2)&#13;
skinnycms_tumblr (0.0.2)&#13;
tumblr (2.2.0)&#13;
tumblr-api (0.1.4)&#13;
tumblr-rb (1.3.0)&#13;
tumblr4r (0.7.2)&#13;
tumblr_cleanr (0.0.1)&#13;
</pre>&#13;
<p>Then for each interesting result: </p>&#13;
<pre>$ gem specification --remote &lt;gem&gt;</pre>&#13;
<p>This brought back a lot of interesting information, but it is hardly an efficient way to browse. Digging into the docs I found an easier way.</p>&#13;
<pre>$ gem search -r -d tumblr</pre>&#13;
<pre>*** REMOTE GEMS ***&#13;
integrity-tumblr (0.1.1)&#13;
    Author: Matías Flores&#13;
    Rubyforge: http://rubyforge.org/projects/integrity&#13;
    Homepage: http://github.com/matflores/integrity-tumblr&#13;
    Tumblr notifier for the Integrity continuous integration server&#13;
 &#13;
matenia-tumblr-api (0.1.6)&#13;
    Author: Jeff Kreeftmeijer, Matenia Rossides&#13;
    Homepage: http://github.com/matenia/tumblr&#13;
    Tumblr API wrapper - maintained by matenia</pre>&#13;
<p>The -r flag is the short notation for --remote, and -d is the short notation for --details. The default for the search action is --no-details, which is why the initial result set was so unhelpful.</p> ]]></description><link>http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</link><guid isPermaLink="true">http://aaronbonner.io/post/5506650152/detailed-searching-for-rubygems</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 15 May 2011 10:49:00 GMT</pubDate></item><item><title><![CDATA[Fix Vmware breaking Linux guest keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</link><guid isPermaLink="true">http://aaronbonner.io/post/5255250025/fix-vmware-breaking-linux-guest-keyboard-shortcuts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 06 May 2011 23:06:54 GMT</pubDate></item><item><title><![CDATA[PEAR acting up under Natty Ubuntu 11.04 ]]></title><description><![CDATA[<p>I've had PEAR act up on 3 different Natty installs; generally its shirtiness manifests itself when you go</p>&#13;
<pre>$ sudo pear upgrade-all&#13;
&gt; PHP Fatal error:  Call to undefined method PEAR::raiseErro() in /usr/share/php/PEAR/REST.php on line 165&#13;
PHP Stack trace:&#13;
...</pre>&#13;
<p>You can fix this nuisance by creating the missing PEAR cache directory:</p>&#13;
<pre>$ sudo mkdir -p /tmp/pear/cache&#13;
$ sudo pear upgrade-all</pre>&#13;
<p>Hope this helps save someone some time!</p> ]]></description><link>http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</link><guid isPermaLink="true">http://aaronbonner.io/post/5171852503/pear-acting-up-under-natty-ubuntu-1104</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 03 May 2011 22:35:09 GMT</pubDate></item><item><title><![CDATA[Easy Access Rails Documentation ]]></title><description><![CDATA[<p>Just learned you can go</p>&#13;
<pre>$ gem server</pre>&#13;
<p>Open your browser at 127.0.0.1:8808 and read the API docs for all of your installed gems (i.e. Rails and friends).</p>&#13;
<p>That is very cool!</p> ]]></description><link>http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</link><guid isPermaLink="true">http://aaronbonner.io/post/5144210487/easy-access-rails-documentation</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 May 2011 23:05:13 GMT</pubDate></item><item><title><![CDATA[Getting Started with Jenkins for PHP]]></title><description><![CDATA[<p>PHP projects are becoming ever larger, and with that size comes complexity that can be difficult to manage.</p>
<p>Typically a PHP project will start off small, some basic webpage views, maybe a few forms, and likely, some sort of search functionality. This is pretty basic and if things need to change, you can normally change it in place, directly on the web-server and without too much grief.</p>
<p>At some point, though, entropy takes its toll and incremental changes have so many unintended side-effects that it&#39;s no longer feasible to safely make edits in place. A quick hack directly on the web-server stops being a &#39;quick win&#39; and becomes more like a game of Russian Roulette.</p>
<p>This is where unit and integration testing come to the fore: a safety net to protect us when we start changing an application. But often getting a test environment set up that is representative of the live system is a lot of work in itself, and the temptation to just give in to the &#39;Inner Pig&#39; is too great. That is, to not bother running any tests, and just cross your fingers.</p>
<p><a href="http://jenkins-ci.org/">Jenkins</a> makes this a lot easier by doing much of the heavy lifting of building and running tests. It will check that code complies with defined style conventions. It can also check for common coding smells (copy/paste, duplication, long methods, large classes, poor expressions) and it can run custom scripts depending on the success of unit and integration tests.</p>
<p>Essentially Jenkins ensures that any changes that go into source control, do not &#39;break the build&#39; in a process known as <a href="http://c2.com/cgi/wiki/wiki?ContinuousIntegration">Continuous Integration</a>.</p>
<p>Installation is very easy. <a href="http://www.jenkins-ci.org">Visit the Jenkins site</a>, follow the instructions for your platform and you&#39;ll have a Jenkins install running (by default) at <a href="http://localhost:8080">http://localhost:8080</a>. I have also written a small installation tutorial: <a href="http://aaronbonner.tumblr.com/post/8339092868/installing-jenkins-on-ubuntu">Installing Jenkins on Ubuntu/Debian Systems</a>.</p>
<p>To get Jenkins initially dressed up for PHP, follow the <a href="http://jenkins-php.org/">jenkins-php.org</a> guide up to and including the setup of the PEAR packages and Jenkins plugins.</p>
<p>You may or may not run into trouble with these instructions. For the most part they worked fine for me; however, I chose to use the CLI tool and the plugin repository was not initialised. To get up and running, I had to force the plugin list to refresh manually.</p>
<p><img src="/images/tumblr_lka2y314xF1qac7a1.png" alt="Jenkins plugin list"></p>
<p>Manage Jenkins &gt; Manage Plugins &gt; Click the Advanced Tab</p>
<p>If you&#39;re like me and want git or svn scm access, you&#39;ll want to install these plugins as well, as they are not included in the list on the jenkins-php instruction page.</p>
<p>Once the PEAR packages and Jenkins Plugins are installed, you&#39;re now ready to start preparing your application for Continuous Integration.</p>
<p>The initial configuration can be quite tedious, as you will need an initial Ant build file and sample configurations for PHP Code Sniffer and the other code analysis tools. Thankfully <a href="http://twitter.com/#!/s_bergmann">Sebastian Bergmann</a> - author of PHPUnit and much of the Jenkins PHP suite of tools - has developed a project wizard utility to simplify these initial configuration steps.</p>
<p>Install the PHP Project Wizard from the PHPUnit channel:</p>
<pre><code>$ sudo pear install phpunit/ppw
</code></pre>
<p>Once installed, you can change into your project dir and run it with a few arguments to setup your initial build environment.</p>
<pre><code>$ ppw --name &#39;My Project&#39; --source ./lib --tests ./tests
</code></pre>
<p>You can also specify arguments defining default rulesets for PHP Code Sniffer and PHP Mess Detector. Omitting these arguments sees ppw select some sane defaults for you.</p>
<p>Now install the Jenkins PHP Job Template:</p>
<pre><code>$ cd path/to/jenkins/jobs
$ git clone git://github.com/sebastianbergmann/php-jenkins-template.git php-template
$ chown -R jenkins:nogroup php-template/
$ curl http://localhost:8080/reload # set this to be the path:port to your jenkins server
</code></pre>
<p>Jenkins and your system are now ready to manage a PHP Project. Please see my tutorial on <a href="http://aaronbonner.tumblr.com/post/8380014964/creating-a-php-project-in-jenkins">Setting up a PHP Project in Jenkins</a> for how to setup your first project in Jenkins.</p>
]]></description><link>http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/4965561040/getting-started-with-jenkins-for-php</guid><category><![CDATA[build]]></category><category><![CDATA[continuous integration]]></category><category><![CDATA[deploy]]></category><category><![CDATA[jenkins]]></category><category><![CDATA[php]]></category><category><![CDATA[phpunit]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 21:28:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby - ARGV a Potential False Friend]]></title><description><![CDATA[<p>Sometimes in a foreign language, you will see a word that has some meaning to you in your native language. Often, these will be cognates and have the same meaning in the foreign language, such as 'green' and 'grün' in German. Sometimes a word can have a completely different meaning, such as 'bad' (which means 'bath' in German). These are commonly called false friends.</p>&#13;
<p>It's similar with programming languages: some familiar constructs will work exactly as you expect, some won't work at all, and some will sort of work.</p>&#13;
<p>In Ruby, if you're like me and come from a C, Unix or PHP background, one of the first things to trip you up will be ARGV.</p>&#13;
<p>In PHP ARGV works like this:</p>&#13;
<pre>&lt;?php&#13;
// test.php&#13;
echo $_SERVER['argv'][0] ?&gt; &#13;
$ php test.php 1234 &#13;
&gt; test.php&#13;
</pre>&#13;
<p>C and BASH work the same way.</p>&#13;
<p>Ruby does things a different (PERL) way:</p>&#13;
<pre>#!/usr/bin/env ruby &#13;
# file test.rb &#13;
puts ARGV[0] &#13;
$ ruby test.rb helloworld &#13;
&gt; helloworld &#13;
</pre>&#13;
<p>The difference is that in PHP, C and BASH the first element of ARGV is the program's name, while in Ruby and PERL it is the first argument passed to the program.</p>&#13;
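<p>Ruby still exposes the program's name; it just lives outside ARGV. A minimal sketch (argv_demo.rb is a hypothetical file name):</p>

```ruby
# argv_demo.rb -- in Ruby, ARGV holds only the arguments;
# the program's name is available separately in the global $0
# (also aliased as $PROGRAM_NAME).
puts "program name: #{$0}"
puts "first argument: #{ARGV[0].inspect}"
```

<p>Running <code>ruby argv_demo.rb helloworld</code> prints the script name on the first line and "helloworld" on the second.</p>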
<p>I'm trying to think which makes more sense, probably the Ruby/PERL implementation, though I'm used to starting at index 1. So just bear in mind that even though constructs may be similar between languages, there is some devil in the detail.</p> ]]></description><link>http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</link><guid isPermaLink="true">http://aaronbonner.io/post/4963953323/learning-ruby-argv-a-potential-false-friend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 26 Apr 2011 20:28:00 GMT</pubDate></item><item><title><![CDATA[Comprehensive list of readline/bash keyboard shortcuts]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</link><guid isPermaLink="true">http://aaronbonner.io/post/4902587017/comprehensive-list-of-readlinebash-keyboard</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 24 Apr 2011 18:58:20 GMT</pubDate></item><item><title><![CDATA[Uninstalling a Feature or Plugin with Eclipse 3.6 (Helios)]]></title><description><![CDATA[<p>Uninstalling software is not the most straightforward thing to do in the 3.6 release of Eclipse.</p>&#13;
<p>Actually, it is a little bit like clicking 'Start' to shut down on Windows. To uninstall a plugin/feature you need to go to the 'Install New Software' screen. On a Mac it's found by opening the Help menu and selecting "Install New Software".</p>&#13;
<p><img src="/images/tumblr_lf6z2dlfOd1qac7a1.png" /></p>&#13;
<p>In the resulting window, there's a link which lets you view already installed software.</p>&#13;
<p><img src="/images/tumblr_lf6z3sTMio1qac7a1.png" /></p>&#13;
<p>Clicking that link brings you to the installation details page which (assuming there are no dependency issues) lets you uninstall unwanted features.</p>&#13;
<p><img src="/images/tumblr_lf6z6ksgHN1qac7a1.png" /></p>&#13;
<p>Uninstall the unwanted software and restart Eclipse and you're good to go.</p> ]]></description><link>http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</link><guid isPermaLink="true">http://aaronbonner.io/post/2802597551/uninstalling-a-feature-or-plugin-with-eclipse-36</guid><category><![CDATA[eclipse]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:18:55 GMT</pubDate></item><item><title><![CDATA[Snow Leopard, Ruby on Rails and Mysql Setup]]></title><description><![CDATA[<p>If you're running OSX Snow Leopard and Macports MySQL, you might run into some drama trying to get Ruby and Mysql playing nicely together.</p>&#13;
<p>This can manifest itself in a number of ways, but the most common, I think, is what happened to me:</p>&#13;
<p><code> aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) Couldn't create database for {"reconnect"=&gt;false, "encoding"=&gt;"utf8", "username"=&gt;"dbuser", "adapter"=&gt;"mysql", "database"=&gt;"testapp_development", "host"=&gt;"127.0.0.1", "pool"=&gt;5, "password"=&gt;"testpwd"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation)</code></p>&#13;
<p>It seems the mysql driver gets confused: when you install the MySQL Rubygem as directed by Rake, it links against the bundled OSX MySQL and not the MacPorts one.</p>&#13;
<p>The solution is to install the mysql gem as follows (uninstalling it first, if necessary):</p>&#13;
<p><code>sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/opt/local/bin/mysql_config5</code></p>&#13;
<p>Substitute the mysql_config5 path with your own MacPorts mysql_config path. Afterwards, everything should work fine.</p>&#13;
<p><code>aaron ~/Development/ruby/testapp $ rake db:create (in /Users/aaron/Development/ruby/testapp) </code></p> ]]></description><link>http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</link><guid isPermaLink="true">http://aaronbonner.io/post/2802306300/snow-leopard-ruby-on-rails-and-mysql-setup</guid><category><![CDATA[ruby]]></category><category><![CDATA[rails]]></category><category><![CDATA[osx]]></category><category><![CDATA[mac]]></category><category><![CDATA[ruby on rails]]></category><category><![CDATA[snow leopard]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 18 Jan 2011 00:00:00 GMT</pubDate></item><item><title><![CDATA[Zend_Search_Lucene Indexing Fun]]></title><description><![CDATA[<p>When working with a Lucene index using the <a title="Zend Framework" href="http://framework.zend.com">Zend Framework's</a> <a title="Zend_Search_Lucene" href="http://framework.zend.com/manual/en/zend.search.lucene.html">Lucene search component</a> you'll often, in the course of the index's lifecycle, want to update documents. This can prove tricky with the current implementation as there is no in-situ update feature: you must first delete the old document and add a new one. The tricky part is locating the unique document you want to update. The 'old way' was as follows:</p>&#13;
<pre>// Retrieving documents with find() method using a query string&#13;
$query = $idFieldName . ':' . $docId;&#13;
$hits  = $index-&gt;find($query);&#13;
foreach ($hits as $hit) {&#13;
    $title = $hit-&gt;title;&#13;
    $contents = $hit-&gt;contents;&#13;
}</pre>&#13;
<p>This proves <em>painfully</em> slow: you're loading the full index in an attempt to find a unique document by its ID. It's even worse if your unique ID happens to be a string such as a URL or path. Since ZF 1.5, the 'best practice' way to perform this type of task is to use the Zend_Search_Lucene::termDocs() method:</p>&#13;
<pre>$term = new Zend_Search_Lucene_Index_Term('/somepath/somewhere', 'path');&#13;
$docIds = $index-&gt;termDocs($term);&#13;
foreach ($docIds as $id) {&#13;
    $doc = $index-&gt;getDocument($id);&#13;
    $title = $doc-&gt;title;&#13;
    $contents = $doc-&gt;contents;&#13;
}</pre>&#13;
<p>Performance-wise this proves much more efficient. However, unless you're careful at the indexing stage you may run into trouble when running termDocs() on a string value such as a URL or path, as opposed to an integer ID. This is down to the field having been added tokenized, which is the most common way fields are added and corresponds to:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Text('title', $title));</pre>&#13;
<p>If you want to use termDocs on an identifying field you need to add the field as type Keyword:</p>&#13;
<pre>$doc = new Zend_Search_Lucene_Document();&#13;
$doc-&gt;addField(Zend_Search_Lucene_Field::Keyword('uri', 'http://a.com/uri'));</pre>&#13;
<p>Keyword fields are not tokenized, and a term vector (which termDocs() requires) <strong>is</strong> stored. The distinction between the two field types is documented in Zend_Search_Lucene_Field's phpdocs:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Text() constructs a String-valued Field that is tokenized and indexed, and is stored in the index, for return with hits. Useful for short text fields, like "title" or "subject". Term vector will not be stored for this field.</blockquote>&#13;
<p>In contrast see:</p>&#13;
<blockquote>Zend_Search_Lucene_Field::Keyword() constructs a String-valued Field that is not tokenized, but is indexed and stored. Useful for non-text fields, e.g. date or url.</blockquote>&#13;
<p>This caught me out until I dug around in the source to see where termDocs was going wrong. Hopefully this helps save someone else some time, and hopefully Zend can update their documentation to draw other developers' attention to this quirk.</p> ]]></description><link>http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</link><guid isPermaLink="true">http://aaronbonner.io/post/2767987891/zendsearchlucene-indexing-fun</guid><category><![CDATA[zend framework]]></category><category><![CDATA[lucene]]></category><category><![CDATA[search]]></category><category><![CDATA[php]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:24:00 GMT</pubDate></item><item><title><![CDATA[PHP 5.x Reflection]]></title><description><![CDATA[<p>One of the nicer, and largely unheralded I think, features of PHP 5 is its comprehensive reflection API. Arguably one of the reasons it is largely unheralded is because its <a title="PHP 5 Reflection Documentation" href="http://uk3.php.net/manual/en/language.oop5.reflection.php">documentation</a> is a bit average. One great little tidbit, and the motivation for this blog post, is that the ReflectionClass-&gt;getMethods($filter=null) method takes an optional parameter 'filter'. In the online documentation there is scant mention of what values this $filter parameter can take. Luckily in the comments Will Mason <a title="Will Mason reflection filter properties" href="http://uk3.php.net/manual/en/language.oop5.reflection.php#76884">chimed</a> in with paydirt:</p>&#13;
<blockquote>If you are looking for the long $filters for ReflectionClass::getMethods(), here they are. They took me a  long time to find. Found nothing in the docs, nor google. But of course, Reflection itself was the final solution, in the form of ReflectionExtension::export("Reflection").</blockquote>&#13;
<pre>// The missing long $filter values!!!&#13;
ReflectionMethod::IS_STATIC;&#13;
ReflectionMethod::IS_PUBLIC;&#13;
ReflectionMethod::IS_PROTECTED;&#13;
ReflectionMethod::IS_PRIVATE;&#13;
ReflectionMethod::IS_ABSTRACT;&#13;
ReflectionMethod::IS_FINAL;&#13;
&#13;
// Use them like this&#13;
$r = new ReflectionClass("MyClass");&#13;
// Print all public methods&#13;
foreach ($r-&gt;getMethods(ReflectionMethod::IS_PUBLIC) as $m) {&#13;
    echo $m-&gt;__toString();&#13;
}&#13;
</pre>&#13;
<p>Another example, this time one of my own, is one that I found myself writing while working with the Zend Framework's Zend_Controller implementation:</p>&#13;
<pre>/**&#13;
 * @param   String $controller_class&#13;
 * @return   ArrayObject&#13;
 */&#13;
private function getActionList($controller_class)&#13;
{&#13;
    $reflection_class = new ReflectionClass($controller_class);&#13;
    $methods = $reflection_class-&gt;getMethods(ReflectionMethod::IS_PUBLIC);&#13;
    return new ArrayObject($methods);&#13;
}&#13;
</pre>&#13;
<p>As with many platforms, it seems that in PHP documentation is no replacement for digging around in the source itself.</p> ]]></description><link>http://aaronbonner.io/post/2767962296/php-5x-reflection</link><guid isPermaLink="true">http://aaronbonner.io/post/2767962296/php-5x-reflection</guid><category><![CDATA[php]]></category><category><![CDATA[reflection]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:22:00 GMT</pubDate></item><item><title><![CDATA[GOF Design Pattern Card]]></title><description><![CDATA[<p>Came across <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">this</a> brilliant GOF pattern cheatsheet/cheatcard on DZone. Essentially it's the GOF creational, behavioural and structural patterns distilled down into a sort of periodic table, in a format suitable for printing at A3. Like most developers, whenever starting a new unit of code I find myself heading to the c2 wiki or, if I am at home, flicking through to the back pages of the GOF book, or in a pinch, hitting Vince Huston's <a href="http://www.vincehuston.org/dp/">The Sacred Elements of the Faith</a> site. This chart helps speed things up a bit.
Check out: <a href="http://www.dzone.com/links/design_patterns_quick_reference.html">http://www.dzone.com/links/design_patterns_quick_reference.html</a> Or download the card directly: <a href="http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf">http://www.mcdonaldland.info/files/designpatterns/designpatternscard.pdf </a></p> ]]></description><link>http://aaronbonner.io/post/2767947020/gof-design-pattern-card</link><guid isPermaLink="true">http://aaronbonner.io/post/2767947020/gof-design-pattern-card</guid><category><![CDATA[gof]]></category><category><![CDATA[patterns]]></category><category><![CDATA[software design]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:21:00 GMT</pubDate></item><item><title><![CDATA[Jury Rigging VBulletin for UTF-8]]></title><description><![CDATA[<p>UTF-8 internationalisation support is a pretty big deal if you want to localise your forums for non-native English speakers. Thankfully though it isn't a complete trial to retrofit VBulletin with basic UTF-8 support. <em>This guide assumes a remotely recent version of VBulletin (I was using 3.6.8 at time of writing) and that you are using MySQL 5.x.</em></p>&#13;
<p>First you need to back up your forum database; the simplest way is to use the mysqldump utility.</p>&#13;
<p><code>mysqldump -uusername -ppassword forum_db_name &gt; forum_db_backup.sql</code></p>&#13;
<p>Copy this output somewhere safe just in case of disaster. You can always revert to what you had by reimporting from this dumpfile. By default VBulletin applies the 'latin1' charset to its schema. We want to replace all instances of this in the dumpfile with 'utf8'.</p>&#13;
<p><code>sed -i 's/latin1/utf8/g' forum_db_backup.sql</code></p>&#13;
<p>Now it's time to take your forum offline. Navigate to: AdminCP -&gt; VBulletin Options -&gt; Turn your VBulletin on/off and change the forum active option to off. When this is done, just for good measure, 'chmod 000' the forum root directory.</p>&#13;
<p>Drop the old VBulletin forum database and create a new, utf8 friendly, one.</p>&#13;
<p><code>mysql --user=username --password=password --execute="DROP DATABASE forum_db_name; CREATE DATABASE forum_db_name CHARACTER SET utf8 COLLATE utf8_general_ci;"</code></p>&#13;
<p>It's time to convert the old database dump to use the UTF-8 character set. My favourite utility for this is <a href="http://www.gnu.org/software/libiconv/">iconv</a>. This util can easily be installed using yum, apt, emerge, synaptic or whatever package manager your distro relies on.</p>&#13;
<p><code>iconv -f latin1 -t utf-8 forum_db_backup.sql &gt; forum_db_backup-utf8.sql</code></p>&#13;
<p>Pump the fixed SQL back into your database.</p>&#13;
<p><code>mysql -uusername -ppassword forum_db_name &lt; forum_db_backup-utf8.sql</code></p>&#13;
<p>If you have a language pack to install, convert its XML to UTF-8 in the same way.</p>&#13;
<p><code>iconv -f iso8859-1 -t utf-8 vbulletin-language.xml &gt; vbulletin-language-utf8.xml</code></p>&#13;
<p>Then import the language in the usual way: AdminCP -&gt; Languages &amp; phrases -&gt; Download/upload languages.</p> ]]></description><link>http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</link><guid isPermaLink="true">http://aaronbonner.io/post/2767907856/jury-rigging-vbulletin-for-utf-8</guid><category><![CDATA[vbulletin]]></category><category><![CDATA[utf-8]]></category><category><![CDATA[i18n]]></category><category><![CDATA[internationalization]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:18:00 GMT</pubDate></item><item><title><![CDATA[10 things to avoid with your CV]]></title><description><![CDATA[<p>While there is scant sign of the current general economic malaise breaking, developers, and web developers in particular, are enjoying strong demand for their skills. So with good opportunities available for talented developers it's worth making the effort to differentiate yourself and take advantage of the favourable job market. Perhaps the most important tool at your disposal is your CV.</p>&#13;
<p>It's good to think of your CV as a tool, no different from a programming language or an IDE. Like any tool it has to be rigorously tried, tested and possibly refactored if it is to be successful and help you achieve your goals. In the case of a CV its purpose is two-fold: a) to get yourself past a recruiter and onto the actual client, and b) to impress the client sufficiently that they want to interview you. So in the spirit of the new year and everyone having 'ten things' lists, here's mine for what to avoid in a modern CV for an IT role:</p>&#13;
<ol><li>Don't send an old CV. If you haven't updated your CV in months, let alone years, it isn't going to sell you as effectively as it can. As a developer or even a recent graduate you're learning all the time. Detail and demonstrate as much of that learning (that is pertinent to the job at hand) as possible. As you get older and more experienced, your perception of what is and isn't important improves. While recently reviewing my own CV I found I had added a significant amount of detail, while actually reducing the overall bulk of the document.</li>&#13;
<li>Please, do not include an interests section. This point is somewhat subjective I grant. But of the CVs we've been forwarded all but one had an interests section. Not one of these contained anything relevant to the job being applied for, and in some cases actually contributed to us ruling the candidate out. When hiring, a client is seeking a professional. Telling them you spend a lot of time playing 'World of Warcraft' in your spare time isn't going to get them jumping out of their skin to interview you - particularly if you don't qualify how that interest is relevant to you performing the duties required.</li>&#13;
<li>Avoid sending a generic CV. Applying for jobs is tedious, time consuming and not immediately rewarding. But sending a generic CV detailing every subject you did at GCSE and A levels doesn't tell the client why they should hire you, and you specifically, for the job at hand. Oh and most clients seeking developers don't care if you worked at Subway or have a Tesco certification in fish mongering, it's simply not relevant.</li>&#13;
<li>Web 2.0 extravagance might be 'in' right now, but try to keep your CV conservative, well spaced and using a minimum of fonts. Giant mastheads, fancy bullets and a mess of fonts aren't impressing anyone, seriously, and most recruiters gut your CV into a standardised format anyway. Do be mindful of how it reads. Clean and clear is the key; save the extra special effort for the content.</li>&#13;
<li>Don't go on for longer than two pages. Trying to cheat this rule by making the font so small that a microfiche reader is required is only going to annoy the recruiter. That means your CV gets forwarded to the 'round file' not the client's inbox.</li>&#13;
<li>Don't list every course you did at University and expect the client to care. Of our graduate applicants not one has detailed what the courses actually contained. Database Systems 1. Cool - and you did what exactly in that course that is relevant to the job? It's often difficult for graduates to pad out their CV in the same way as someone with a few years' experience. Treat your degree like a job, work out what specific skills you got from your courses and detail the skill qualified with a practical example of how you applied it in your course work. Start doing that for all of your courses and very quickly you'll bang into the two page rule. Keep going, but when you're finished pare down the text into bullet points that are directly relevant to the job spec.</li>&#13;
<li>Spelling mistakes! Simply do not forget to proofread your CV. This really ought to go without saying. Get your sister, mother, girlfriend, house mate, anyone, to read your CV before sending it. Read it backwards, turn the zoom up to 400% and read it one word at a time. Anything to ensure that what you send out is as professional as you are. A sloppy CV leads to a sloppy impression. 'The Halo Effect', while not always fair, is something you have to deal with in the real world. If a client's first impression of you is a bad one, all subsequent impressions will be shaded by it.</li>&#13;
<li>Don't include a picture of yourself. Like ever. It's simply not necessary.</li>&#13;
<li>Don't ramble, don't generalise, don't make statements without qualification. Good use of English is one of the best indicators of a good developer. Good code is succinct, clear and well qualified (unit tested). Like PHP, Java, indeed any programming language, English has a grammar, a vocabulary and patterns of use. Make best use of these attributes in your CV. Constantly refactor your CV to weed out redundancies, group like concepts, and simplify and support them. Like good code, a good CV goes through many iterations until it's 'good enough' for the real world. Mercilessly refactor your CV as you would your program code.</li>&#13;
<li>Don't accept everything you read on the net about CVs as gospel. This caveat applies to everything in life and not least this blog itself. If you listen to every top ten list of what not to include in your CV you'll quickly find out there's absolutely nothing you should put in your CV. Critically consider what you read suggested about what a good CV looks like and make your own mind up based upon the supporting arguments made and your own CV's feedback. For example if you disagree about point two and decide to include an 'interests section' ask recruiters when they call you what they think about it, did it provide value or was it noise? If you're getting interviews ask the recruiters what in your CV is standing out. If you're not, then ask what, if any, feedback there is from the client - and react to it.</li>&#13;
</ol><p>I really do think of a CV like program code or a developer's utility. As a good developer you always need to evaluate your tools to ensure they are providing the maximum value. And like program code there is always an element of entropy to catch and counter over time. A CV, like program code, can sometimes be brittle and break with additions. Using and maintaining one is a matter of constantly reviewing, refactoring and responding to user feedback.  For some good resources on CV writing I really recommend the following links:  <a href="http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp">http://www.harveynash.com/uk/hnit/jobs/articles_cv_basics.asp</a> <a href="http://www.lifeclever.com/give-your-resume-a-face-lift/">http://www.lifeclever.com/give-your-resume-a-face-lift/</a> And for graduates especially:  <a href="http://www.kent.ac.uk/careers/cv.htm">http://www.kent.ac.uk/careers/cv.htm</a></p> ]]></description><link>http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</link><guid isPermaLink="true">http://aaronbonner.io/post/2767756064/10-things-to-avoid-with-your-cv</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sun, 16 Jan 2011 00:08:00 GMT</pubDate></item><item><title><![CDATA[Learning Ruby and the Constant Conundrum]]></title><description><![CDATA[<p>Now after the hype surrounding the language has died down somewhat, I'm getting acquainted with Ruby and learning the ropes.</p>&#13;
<p>Guiding me through the Ruby syntactic jungle is O'Reilly's 'Learning Ruby'. I checked out the ubiquitous Pragmatic Programmers' 'Programming Ruby', but the third edition seems better suited to stopping doors than to teaching Ruby.</p>&#13;
<p>Reading through the first few chapters of 'Learning Ruby' has left me a little nostalgic for the early 90s, when I was going through my first programming book, 'Learning Perl'. Maybe it's the similar titles, but mostly it's that Ruby's syntax feels as slippery to me as Perl's did.</p>&#13;
<p>I don't mean that in an endearing way.</p>&#13;
<p>While I'm in love with blocks (or closures, whichever you prefer), Procs and Mixins (Traits), a simple thing scares me. Constants.</p>&#13;
<p>Constant, the word is pretty clear in its meaning. It is a value which remains stable over time. Wikipedia says: 'a constant is a special kind of variable whose value cannot typically be altered by the program during its execution'. Collins dictionary says: ‘something that does not or cannot change or vary’.</p>&#13;
<p>Ruby, though, says no. Constants are just variables whose names are capitalised. Oh sorry, that should be capitalized; Ruby doesn&#8217;t speak English, only American. A constant in Ruby is about as constant as your discipline. If you <strong>really</strong> <span>want to change its value, Ruby won't stand in your way, or even make it hard: reassigning a constant merely emits an 'already initialized constant' warning and carries on.</span></p>&#13;
<p>That really, really concerns me. Part of the appeal of Ruby, I guess, is that it lets you do things you want, your way. Maybe it&#8217;s &#8216;<a title="Fear" href="http://c2.com/cgi/wiki?FearOfTheUnknown">Fear</a>&#8217;, but I really don&#8217;t want to have to worry about my constants changing. Ruby makes a big deal of its &#8216;duck typing&#8217;. You know: if it quacks like a duck, it is a duck. Unfortunately, in Ruby a constant quacks like a duck but bites like a hippo.</p> ]]></description><link>http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</link><guid isPermaLink="true">http://aaronbonner.io/post/2746328561/learning-ruby-and-the-constant-conundrum</guid><category><![CDATA[ruby]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 14 Jan 2011 17:35:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Hopefully this saves s'one some time, when trying to get an oauth token from salesforce use the login server instance, not your regional one]]></description><link>http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471181/hopefully-this-saves-sone-some-time-when-trying</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Native OSX SOQL Tool! 
http://bit.ly/fBoKia]]></description><link>http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471127/native-osx-soql-tool-httpbitlyfbokia</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Another #salesforce timesaver, when making your oauth login request, ensure you DO NOT submit it a multipart form request.]]></description><link>http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</link><guid isPermaLink="true">http://aaronbonner.io/post/1691471161/another-salesforce-timesaver-when-making-your</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Nov 2010 15:24:12 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're working with salesforce, definitely checkout the force api explorer app, great to quickly test soql queries: http://bit.ly/hov5qo]]></description><link>http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</link><guid isPermaLink="true">http://aaronbonner.io/post/1679978011/if-youre-working-with-salesforce-definitely</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 25 Nov 2010 12:42:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Zend Framework life saver Zend_Db_Select::assemble() - outputs the objects current sql state.]]></description><link>http://aaronbonner.io/post/1652863113/zend-framework-life-saver</link><guid isPermaLink="true">http://aaronbonner.io/post/1652863113/zend-framework-life-saver</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Nov 2010 00:07:10 GMT</pubDate></item><item><title><![CDATA[Colour Trends]]></title><description><![CDATA[<p>If you're a design challenged developer like myself http://www.colourlovers.com/ will help you pick 
a cohesive colour palette for your application.</p> ]]></description><link>http://aaronbonner.io/post/1618905407/colour-trends</link><guid isPermaLink="true">http://aaronbonner.io/post/1618905407/colour-trends</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Nov 2010 14:39:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Some people using #VIM comes naturally. I.e. they have been using it for 25 years. For the rest of the world there is http://vimcasts.org/]]></description><link>http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</link><guid isPermaLink="true">http://aaronbonner.io/post/1590776168/some-people-using-vim-comes-naturally-ie-they</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 16 Nov 2010 12:25:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @oliklee: Just finished the #phpunit mock #cheatsheet for my workshop at #t3con10: http://bit.ly/dcru3c]]></description><link>http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</link><guid isPermaLink="true">http://aaronbonner.io/post/1215137593/rt-oliklee-just-finished-the-phpunit-mock</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 30 Sep 2010 11:20:23 GMT</pubDate></item><item><title><![CDATA[Getting SQL output from Zend_Db_Select]]></title><description><![CDATA[<p>Handy method Zend_Db_Select::assemble() will output the select object's state as sql. Very handy when debugging.</p> ]]></description><link>http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</link><guid isPermaLink="true">http://aaronbonner.io/post/1042670934/getting-sql-output-from-zenddbselect</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 31 Aug 2010 14:52:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Sometimes you don't have the unix util seq available (i.e. 
macosx). 'jot' is your friend; note the args are back to front: seq 1 12 = jot 12 1]]></description><link>http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</link><guid isPermaLink="true">http://aaronbonner.io/post/927065479/sometimes-you-dont-have-the-unix-util-seq</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 09 Aug 2010 13:38:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @shiflett: I never knew the Google Cheat Sheet Existed. I see myself using the synonym operator often. http://j.mp/googcheat /via @gnat]]></description><link>http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</link><guid isPermaLink="true">http://aaronbonner.io/post/894409296/rt-shiflett-i-never-knew-the-google-cheat-sheet</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 02 Aug 2010 19:50:10 GMT</pubDate></item><item><title><![CDATA[PHP and C: The value of getting your hands dirty]]></title><description><![CDATA[<p>Many PHP programmers today fall into two camps: those that are self-taught and learned PHP to make a website, or those coming out of university Comp Sci courses heavily grounded in strictly typed OO (i.e. Java).</p>&#13;
<p>This is, of course, not absolute; there are camps of reformed Perl sysadmin monkeys that haven&#8217;t been seduced by Python, and a rare breed of ex-systems programmers versed in C and C++.</p>&#13;
<p>It is actually this last group of coders that tend to get on with PHP the best.</p>&#13;
<p>This makes sense, because PHP was originally designed to be a template engine that produced HTML, with a core C backend doing the grunt work. That seems a ridiculous idea now, as CPU cycles are much cheaper than man-hours, but in the late 90s native code ran circles around dynamic, interpreted alternatives.</p>&#13;
<p>These days C is a little bit like ASM was in the 90s: being proficient in it is handy, but not essential. I would argue, though, that for PHP coders it is an important environment to be familiar with.</p>&#13;
<p>Many of PHP's eccentricities can be traced to its C roots, from the strange and inconsistent function parameter orders right down to the sometimes peculiar way it handles memory and references.</p>&#13;
<p>The file and string functions marry up almost 1:1 with standard C. Experience with, and an understanding of, how these low-level functions work will immeasurably improve your use of their PHP counterparts.</p>&#13;
<p>Networking is another area where PHP and C are very similar. Indeed, you can write a simple TCP client in PHP, strip out all the $ variable identifiers, and be left with a recognisable C fragment.</p>&#13;
<p>The biggest benefit of learning a bit of C or C++, though, is gaining an appreciation of memory management. C is almost unique among the programming environments of today in that it does not manage memory for you. Arguably this is what makes C so painful to develop in, but it is also the characteristic that keeps the environment relevant today.</p>&#13;
<p>By way of example, strings in C are represented by an array of chars, terminated by a NUL '\0'.</p>&#13;
<p>In C, you would declare a string like this: <code>char str[20];</code></p>&#13;
<p>This declares a string of 20 characters (19 usable characters plus the terminating NUL), and str itself refers to an address in memory where 20 bytes have been reserved. Now, if we try to write 21 characters to this string, C won't auto-expand the array to fit, or do anything so helpful as warn you that you've gone out of the bounds of your allocated storage. No, C will do what you tell it, and in this case that means overwriting someone else's data.</p>&#13;
<p>This is quite a powerful characteristic and as such requires a programmer to be responsible.</p>&#13;
<p>Generally, if you have a 20-char string defined but want to store a 25-char string, you need to reallocate memory. You could declare a new array or do a concatenation operation; either way, this costs CPU time.</p>&#13;
<p>So when in PHP you are spamming '.' concat operators everywhere, you can appreciate that PHP is doing a lot of memory re-allocation under the hood to provide that syntactic sugar.</p>&#13;
<p>As mentioned above, PHP was originally intended to be purely a templating language for C web applications, which is where PHP modules / extensions come in: this is where your business logic was originally meant to go. PHP extensions are compiled to native code, and as such run FAST. You can then access their functions in PHP just like the other core functions and classes. Yahoo and Facebook use PHP in this way.</p>&#13;
<p>So can you. A great way to dabble in a bit of C is to write your own PHP extension. The benefit of getting your hands dirty in this way is the nice speed boost for the functions in your extension. But the more important benefit is how it can greatly improve your understanding of what PHP does for you under the hood. This lets you make informed design choices when writing your regular PHP code.</p>&#13;
<p>As a starting point, I can thoroughly recommend the <a href="http://uk.php.net/manual/en/internals2.php">Zend engine hacker's guide</a> on the main PHP site. Becoming familiar with the structure of the internals is the first step to appreciating all of the magic PHP does for you under the hood.</p> ]]></description><link>http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</link><guid isPermaLink="true">http://aaronbonner.io/post/879262577/php-and-c-the-value-of-getting-your-hands-dirty</guid><category><![CDATA[C]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Jul 2010 10:50:00 GMT</pubDate></item><item><title><![CDATA[Little Endian vs Big Endian]]></title><description><![CDATA[<p>The concept is pretty straightforward: either the big end of a binary number (expressed as a series of bytes) is stored first (big endian) or last (little endian).</p>&#13;
<p>But how does this affect you as a developer, and what is the reasoning behind each approach? This paper goes a long way to explaining this rather fundamental aspect of computer science.</p>&#13;
&#13;
 ]]></description><link>http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</link><guid isPermaLink="true">http://aaronbonner.io/post/874937973/little-endian-vs-big-endian</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 29 Jul 2010 11:56:18 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Quick note: mount -o bind can be achieved on a mac with macfuse and the bindfs tool]]></description><link>http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</link><guid isPermaLink="true">http://aaronbonner.io/post/870789207/quick-note-mount-o-bind-can-be-achieved-on-a-mac</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 28 Jul 2010 13:53:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: RT @zvikico: New blog post: Eclipse 3.6 Hidden Treasures http://bit.ly/di4wlF]]></description><link>http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/861480707/rt-royganor-rt-zvikico-new-blog-post-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Jul 2010 14:19:08 GMT</pubDate></item><item><title><![CDATA[Using memory as a file in PHP 5.1+]]></title><description><![CDATA[<p>Some functions (fgetcsv and fputcsv, for example) require a stream handle to work with. Similarly, Zend_Pdf has methods that expect to read and write image data from a stream.</p>&#13;
<p>This can be inconvenient when you already have the data sitting in a variable. One way to avoid physically creating a file is to use the memory stream type.</p>&#13;
<p>PHP supports a number of input / output streams ranging from the usual stdin, stderr, stdout to memory, temp and filter.</p>&#13;
<p>See http://php.net/manual/en/wrappers.php.php for more information on these.</p>&#13;
<p>But looking at the memory type, it's very easy to use. Simply call <code>$fh = fopen('php://memory', 'wb+');</code> and you can use the usual file functions you would typically associate with an on-disk file.</p>&#13;
<p>You can fread, fwrite, file_get_contents on the memory stream or push it out over the network using the tcp streams PHP offers. PHP streams are a powerful and often underutilised aspect of the language.</p> ]]></description><link>http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</link><guid isPermaLink="true">http://aaronbonner.io/post/840054542/using-memory-as-a-file-in-php-51</guid><category><![CDATA[php]]></category><category><![CDATA[streams]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 21 Jul 2010 08:47:29 GMT</pubDate></item><item><title><![CDATA[Quick file sharing with python]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</link><guid isPermaLink="true">http://aaronbonner.io/post/832180085/quick-file-sharing-with-python</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 19 Jul 2010 14:39:09 GMT</pubDate></item><item><title><![CDATA[Understanding Big and Little Endian Byte Order | BetterExplained]]></title><link>http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</link><guid isPermaLink="true">http://aaronbonner.io/post/802655789/understanding-big-and-little-endian-byte-order</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 17:11:02 GMT</pubDate></item><item><title><![CDATA[Retrieving Product Attributes from Magento's V2 API]]></title><link>http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</link><guid isPermaLink="true">http://aaronbonner.io/post/801777949/retrieving-product-attributes-from-magentos-v2</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Jul 2010 11:53:09 GMT</pubDate></item><item><title><![CDATA[RFC 791 - Internet Protocol]]></title><link>http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</link><guid 
isPermaLink="true">http://aaronbonner.io/post/789333197/rfc-791-internet-protocol</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Beej's Guide to Network Programming]]></title><link>http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</link><guid isPermaLink="true">http://aaronbonner.io/post/789333191/beejs-guide-to-network-programming</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Jul 2010 12:39:26 GMT</pubDate></item><item><title><![CDATA[Programming Language Performance Shootout]]></title><description><![CDATA[<p>A flawed but interesting set of automated benchmarks of various programming languages against one-another.</p> ]]></description><link>http://aaronbonner.io/post/781206059/programming-language-performance-shootout</link><guid isPermaLink="true">http://aaronbonner.io/post/781206059/programming-language-performance-shootout</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 14:27:36 GMT</pubDate></item><item><title><![CDATA[Grouping by Date in MySQL]]></title><description><![CDATA[<p>A quick tip this one. Whenever you want to group a reporting query by date (by month, day, year) you can use the following approach in MySQL</p>&#13;
<p><code>SELECT DATE_FORMAT(yourdate, '%Y-%m') AS grouping_date, COUNT(id) FROM yourtable GROUP BY grouping_date;</code></p>&#13;
<p>Change the date format to whatever date quantum you want to report on, i.e. just have '%Y' for a yearly grouping, or have '%Y-%m-%d' for a daily one.</p> ]]></description><link>http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</link><guid isPermaLink="true">http://aaronbonner.io/post/780482559/grouping-by-date-in-mysql</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 07 Jul 2010 09:28:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy #osx tip: Cmd-Shift-4 lets you screenshot a selectable region of your desktop]]></description><link>http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</link><guid isPermaLink="true">http://aaronbonner.io/post/772956908/handy-osx-tip-cmd-shift-4-lets-you-screenshot-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 05 Jul 2010 14:35:12 GMT</pubDate></item><item><title><![CDATA[Five reasons why the shut-op operator (@) should be avoided — Derick Rethans]]></title><link>http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</link><guid isPermaLink="true">http://aaronbonner.io/post/753257891/five-reasons-why-the-shut-op-operator-should</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 30 Jun 2010 09:55:30 GMT</pubDate></item><item><title><![CDATA[Re-trying failed worldpay payment notifications]]></title><description><![CDATA[<p>If for some reason a Worldpay Business Gateway callback fails, it can often be difficult to manually complete an order.</p>&#13;
<p>The quickest way is very simple: use curl to resubmit the callback. This assumes you have found, and if necessary corrected, the issue that stopped the callback succeeding in the first place.</p>&#13;
<p>When a callback fails, either through a timeout or some internal (500) error, you will receive an email from Worldpay notifying you. This mail includes two attachments: the request data Worldpay sent to your callback URL, including the encoded POST data, and the response from your server.</p>&#13;
<p>Now, assuming you work around your security arrangements (e.g. your callback URL should listen only to certain addresses and expect auth credentials), you can use curl to resubmit your POST data:</p>&#13;
<pre>curl -d 'testMode=0&amp;authCost=716.86&amp;currency=GBP&amp;address=15+Somewhere+Street&amp;countryString=United+Kingdom&amp;callbackPW=xyz&amp;installation=2555555&amp;fax=&amp;countryMatch=Y&amp;transId=1000000000&amp;AVS=2222&amp;amountString=%26%23163%3B716.86&amp;postcode=XX11+1XX&amp;msgType=authResult&amp;name=Mr+Test+Tester&amp;tel=0208+111111&amp;transStatus=Y&amp;desc=Test+Transaction+1234&amp;cardType=Visa+Delta&amp;lang=en&amp;transTime=1277741069650&amp;authAmountString=%26%23163%3B716.86&amp;authAmount=716.86&amp;ipAddress=80.80.80.80&amp;cost=716.86&amp;charenc=UTF-8&amp;instId=2555555&amp;amount=716.86&amp;compName=Testing Tester&amp;_SP.charEnc=UTF-8&amp;country=GB&amp;rawAuthMessage=cardbe.msg.authorised&amp;authCurrency=GBP&amp;email=testy@tester.com&amp;cartId=12345678&amp;rawAuthCode=A&amp;authMode=A' \<br />'https://mysite.com/callback'</pre>&#13;
<p>Now your program can handle the completion of the order as normal.</p>&#13;
<p>You can make this more sophisticated, by having a mail reader retrieve any missed payment notifications and parse them. With each failure notification you can add a retry to a job queue, or other batch processing stack. It would be worth adding some sort of retry threshold though, as well as sending notifications to developers to ensure there is no serious malfunction occurring.</p> ]]></description><link>http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</link><guid isPermaLink="true">http://aaronbonner.io/post/748919080/re-trying-failed-worldpay-payment-notifications</guid><category><![CDATA[worldpay]]></category><category><![CDATA[curl]]></category><category><![CDATA[ecommerce]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 29 Jun 2010 08:20:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're using Solr for search, 1.4.1 was released over the weekend with a considerable number of bug fixes: http://bit.ly/cwx2Pu]]></description><link>http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</link><guid isPermaLink="true">http://aaronbonner.io/post/745471569/if-youre-using-solr-for-search-141-was</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:16:32 GMT</pubDate></item><item><title><![CDATA[Converting Files From the Command Line in OSX with Textutil]]></title><description><![CDATA[<p>Really handy tip this one, via the mactricksandtips.com website. See here: http://www.mactricksandtips.com/2010/06/converting-files-in-terminal-including-docx.html</p>&#13;
<p>For the impatient, the cli programme that weaves this magic is called 'textutil'.</p> ]]></description><link>http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</link><guid isPermaLink="true">http://aaronbonner.io/post/745461588/converting-files-from-the-command-line-in-osx-with</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 28 Jun 2010 12:11:57 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[A tour of eclipse helios http://bit.ly/9Dol5y]]></description><link>http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</link><guid isPermaLink="true">http://aaronbonner.io/post/729766049/a-tour-of-eclipse-helios-httpbitly9dol5y</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 24 Jun 2010 00:26:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I imagine a future with a computer on every desktop and in every home. And not one of them running windows. 
http://bit.ly/cR6gW6]]></description><link>http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</link><guid isPermaLink="true">http://aaronbonner.io/post/726476777/i-imagine-a-future-with-a-computer-on-every</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 22 Jun 2010 20:34:38 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Looks like release builds for #eclipse #helios popped up on Friday see: http://bit.ly/c3PMal]]></description><link>http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</link><guid isPermaLink="true">http://aaronbonner.io/post/721799357/looks-like-release-builds-for-eclipse-helios</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 13:00:58 GMT</pubDate></item><item><title><![CDATA[Take advantage of svn:externals]]></title><description><![CDATA[<p>There are some good arguments against doing this, but I'll cover those another time.</p>&#13;
<p>Anyway, using svn:externals is a convenient way of keeping third-party libraries out of your own repository.</p>&#13;
<p>For example, many PHP applications today have the Zend Framework as a dependency. To include the library in your application without having to download the tarball and use something like phing to set it up in place, you can simply run</p>&#13;
<p><code>svn propedit svn:externals</code></p>&#13;
<p>Then in your editor, add the following line (each externals definition goes on a new line)</p>&#13;
<p><code>Zend http://framework.zend.com/svn/framework/standard/tags/release-1.10.5/</code></p>&#13;
<p>Set this property on your standard application library directory, or somewhere in your include path.</p>&#13;
<p>When you run svn update, svn will pull down the Zend library into your defined location.</p>&#13;
<p>The Zend Framework, out of the box, comes with some Dojo dependencies that are optionally bundled with the framework. You can use the same approach to pull in the Dojo deps by setting svn:externals definitions for your javascript include dir and pointing them at http://svn.dojotoolkit.org/src/tags/release-1.4.3/ (or trunk if you're brave).</p>&#13;
<p>Again, there are some good arguments for NOT doing this, but for me, the convenience of this approach outweighs those (which I'll cover in more detail later).</p> ]]></description><link>http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</link><guid isPermaLink="true">http://aaronbonner.io/post/721560580/take-advantage-of-svnexternals</guid><category><![CDATA[svn]]></category><category><![CDATA[development tools]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 21 Jun 2010 11:03:00 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4.1.0]]></title><description><![CDATA[<p>So far I've encountered a couple of issues, one of which truly boggles the mind.</p>&#13;
<p>The first one is the one you're most likely to encounter after upgrading:</p>&#13;
<p><span> Fatal error: Call to a member function toHtml() on a non-object in &lt;somewhere&gt;/app/code/core/Mage/Core/Model/Layout.php on line <em>52</em></span></p>&#13;
<p><strong>The solution is explained here:</strong></p>&#13;
<p>http://rackspeed.de/forum/magento-faq-installation-and-updates/fatal-error-call-to-a-member-function-tohtml-on-a-non-object-layout-php-on-line-529-a-547</p>&#13;
<p>Basically, edit your theme's layout/page.xml and change <code>&lt;block type="core/profiler" output="toHtml"/&gt;</code> to <code>&lt;block type="core/profiler" output="toHtml" name="core_profiler"/&gt;</code></p>&#13;
<p>The next two are frankly amazing in that they ever made it into the release.</p>&#13;
<p>See here:</p>&#13;
<p>http://www.magentocommerce.com/boards/viewthread/195761/</p>&#13;
<p>To quote the finder of these issues: 'Yes, preparing a shipment prepares an invoice instead..'</p>&#13;
 ]]></description><link>http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</link><guid isPermaLink="true">http://aaronbonner.io/post/715513370/upgrading-to-magento-1410</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 19 Jun 2010 17:25:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#magento #wtf Prepare a shipment by preparing an invoice... $shipment = Mage::getModel('sales/service_order', $this)-&gt;prepareInvoice($qtys);]]></description><link>http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</link><guid isPermaLink="true">http://aaronbonner.io/post/712116026/magento-wtf-prepare-a-shipment-by-preparing-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 18 Jun 2010 17:50:42 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mac OSX 10.4 includes some optimisations to apple mail. Very welcome indeed!]]></description><link>http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</link><guid isPermaLink="true">http://aaronbonner.io/post/704508801/mac-osx-104-includes-some-optimisations-to-apple</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 16 Jun 2010 13:50:16 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#Eclipse Helios RC4 builds out see: http://bit.ly/aJNzMO]]></description><link>http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</link><guid isPermaLink="true">http://aaronbonner.io/post/687014607/eclipse-helios-rc4-builds-out-see</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 11 Jun 2010 13:30:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[It might be worth noting that Apache solr trunk (1.5) has spatial search support. 
See http://bit.ly/bHFQa4]]></description><link>http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</link><guid isPermaLink="true">http://aaronbonner.io/post/681107187/it-might-be-worth-noting-that-apache-solr-trunk</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 09 Jun 2010 20:12:46 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Do people still use the 80 character margin anymore? I do, and I'm wondering why? It's not like we're coding on a VAX terminal anymore...]]></description><link>http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</link><guid isPermaLink="true">http://aaronbonner.io/post/676481227/do-people-still-use-the-80-character-margin</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 08 Jun 2010 12:37:02 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you're trying a recent RC build of Eclipse Helios, best you use Subversive for the moment. Helios+Subclipse is an epic fail right now...]]></description><link>http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</link><guid isPermaLink="true">http://aaronbonner.io/post/673469390/if-youre-trying-a-recent-rc-build-of-eclipse</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 16:21:41 GMT</pubDate></item><item><title><![CDATA[More Case-sensitivity Madness in Magento]]></title><description><![CDATA[<p>Magento rewrites behave differently when overriding a helper class compared to overriding a block class.</p>&#13;
<p>In short, when overriding a helper, the context element IS case sensitive. With blocks, it is NOT.</p> ]]></description><link>http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/673161177/more-case-sensitivity-madness-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 07 Jun 2010 14:07:00 GMT</pubDate></item><item><title><![CDATA[Magento 1.4 Changes PDF Address Format]]></title><description><![CDATA[<p>In Magento 1.4, the address format used in PDFs has changed. At this point I'm not sure if this is a deliberate or unintended change. It affects addresses such that they are all displayed on a single line. If you prefer the older, multi-line address display you will need to override the formatting in your local.xml file, or with a module (don't be tempted to hack the Mage/Customer/etc/config.xml…).<br /><br />Please note, within a &lt;![CDATA block, spacing *is* important, hence the lack of indenting.</p>&#13;
<p><code>&lt;![CDATA[<br />{{depend prefix}}{{var prefix}} {{/depend}}{{var firstname}} {{depend middlename}}{{var middlename}} {{/depend}}{{var lastname}}{{depend suffix}} {{var suffix}}{{/depend}}|<br />{{depend company}}{{var company}}|{{/depend}}<br />{{var street1}}|<br />{{depend street2}}{{var street2}}|{{/depend}}<br />{{depend street3}}{{var street3}}|{{/depend}}<br />{{depend street4}}{{var street4}}|{{/depend}}<br />{{depend city}}{{var city}},  {{/depend}} {{var region}} {{depend postcode}}{{var postcode}} {{/depend}}|<br />{{var country}}|<br />{{depend telephone}}T: {{var telephone}}{{/depend}}|<br />{{depend fax}}&lt;br /&gt;F: {{var fax}}{{/depend}}|<br />]]&gt;</code></p>&#13;
<p>The format may be slightly terse but hopefully you can see how it can be applied to create your own desired format.</p> ]]></description><link>http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</link><guid isPermaLink="true">http://aaronbonner.io/post/665827490/magento-14-changes-pdf-address-format</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:33:32 GMT</pubDate></item><item><title><![CDATA[Magento Model Camelcasing Convention]]></title><description><![CDATA[<p>Magento models have slightly eccentric camelcasing 'conventions'. In short do not camelcase your model names.</p>&#13;
<p>With magento models, you may be tempted to use camelcasing for long class names, for example MyPackage_MyModule_Model_ALongNameForAModel. This will not work as you might expect in different (namely case-sensitive) environments.</p>&#13;
<p>Your Mage::getModel(mymodule/a_long_name_for_a_model) call will translate to the class loader trying to find A/Long/Name/For/A/Model.php.</p>&#13;
<p>Conversely trying to address your model Mage::getModel(mymodule/alongnameforamodel) will see the classloader trying to load Alongnameforamodel.php, and not ALongNameForAModel.php.</p>&#13;
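<p>The two translations can be sketched in plain PHP (a simplification for illustration, not Magento's actual source):</p>

```php
<?php
// 1. Mage::getModel('mymodule/a_long_name_for_a_model') uppercases each
//    underscore-separated part of the alias to build the class name:
$alias = 'a_long_name_for_a_model';
$class = 'MyPackage_MyModule_Model_'
       . implode('_', array_map('ucfirst', explode('_', $alias)));
// $class is now MyPackage_MyModule_Model_A_Long_Name_For_A_Model

// 2. The autoloader then maps each underscore to a directory separator:
$file = str_replace(' ', '/', ucwords(str_replace('_', ' ', $class))) . '.php';
// $file is now MyPackage/MyModule/Model/A/Long/Name/For/A/Model.php

echo $class . "\n" . $file . "\n";
```

Every underscore becomes a directory, which is why a camelcased class name like ALongNameForAModel can never be reached through a lowercase alias on a case-sensitive filesystem.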
<p>On Windows this is fine; on a case-sensitive filesystem (e.g. case-sensitive HFS+ on Mac, or a typical Unix filesystem), it will not work.</p> ]]></description><link>http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</link><guid isPermaLink="true">http://aaronbonner.io/post/665819340/magento-model-camelcasing-convention</guid><category><![CDATA[camel casing]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:29:06 GMT</pubDate></item><item><title><![CDATA[Upgrading to Magento 1.4 with Custom Layouts]]></title><description><![CDATA[<p>One of the (many) gotchas upgrading to Magento 1.4 is an issue with CMS pages that use a custom layout.<br /><br />In 1.4 the template layout has changed materially. Previously you will have had a frontend/default/default theme, and probably a frontend/default/custom theme.<br /><br />Now it's frontend/base/default and frontend/custom/default.<br /><br />This will bite you, because magento stores the path of the custom theme in the database for each cms_page.<br /><br />So if your old custom theme was default/custom, you will need to update the database to change it to custom/default or set it to null.<br /><br />You can avoid this by editing your custom pages and manually saving each one, but that's painful.<br /><br />Instead connect directly to your database and issue the query:</p>&#13;
<p><code>UPDATE cms_page SET custom_theme = 'custom/default' where custom_theme='default/custom';</code></p> ]]></description><link>http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</link><guid isPermaLink="true">http://aaronbonner.io/post/665810291/upgrading-to-magento-14-with-custom-layouts</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 05 Jun 2010 07:24:29 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[You may (or not) have wondered: "Should I download 32bit or 64bit Cocoa Eclipse'. The answer depends on how much work you want to get done..]]></description><link>http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</link><guid isPermaLink="true">http://aaronbonner.io/post/661008297/you-may-or-not-have-wondered-should-i-download</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In my experience, 64bit = lots of coffee, and not much work.]]></description><link>http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</link><guid isPermaLink="true">http://aaronbonner.io/post/661008275/in-my-experience-64bit-lots-of-coffee-and-not</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 03 Jun 2010 21:08:30 GMT</pubDate></item><item><title><![CDATA[Speedy attribute updates in Magento]]></title><description><![CDATA[<p>If like me, you take an unsophisticated approach to batch product updates in Magento, you may have noticed it can be a little slow.</p>
<p>As one of my clients&#39; sites has grown, some batch updates were taking up to 30 minutes to run. This is too long.</p>
<p>If the changes you are making just update simple attributes (for example, we have a sales ranking attribute), you can use the following code to update the value without incurring the massive overhead of a full product save.</p>
<p><code>$product->setNumSales(1234); 
$product->getResource()->saveAttribute($product, 'num_sales'); 
</code></p>
<p>The saveAttribute method takes two parameters, the first is the model containing the attribute value, the second the attribute code. To find out the attribute code, look it up in either the db (eav_attribute) or in the admin backend under catalog-&gt;attribute.</p>
<p>Using the getResource()-&gt;saveAttribute() call takes around a fifth of a second; doing a full save() takes 2-3 seconds. When iterating over a large product base, that is HUGE.</p>
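<p>For batch runs, the fast path above can be wrapped in a loop over a product collection. A rough sketch, assuming a bootstrapped Magento install; calculateNumSales() is a placeholder for your own ranking logic:</p>

```php
<?php
// Not standalone: requires Magento's app/Mage.php to be available.
require_once 'app/Mage.php';
Mage::app();

$products = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('num_sales');

foreach ($products as $product) {
    // calculateNumSales() is a hypothetical helper, stand in your own logic
    $product->setNumSales(calculateNumSales($product));
    // writes just this one attribute, skipping the full save() overhead
    $product->getResource()->saveAttribute($product, 'num_sales');
}
```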
<p><em>Update 4 Mar 2014</em> - Please take a look at DannyD&#39;s comment below for a more robust approach to mass attribute updates.</p>
]]></description><link>http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/657154675/speedy-attribute-updates-in-magento</guid><category><![CDATA[magento]]></category><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 02 Jun 2010 17:22:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Eclipse Helios RC2 is out now which includes an updated PDT build see: http://bit.ly/bl7Er3]]></description><link>http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</link><guid isPermaLink="true">http://aaronbonner.io/post/652786897/eclipse-helios-rc2-is-out-now-which-includes-an</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 01 Jun 2010 09:15:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Helios PDT RC1 links for osx cocoa x64 appear to be borked on the eclipse site, go here: http://bit.ly/cQTWX9]]></description><link>http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265085/helios-pdt-rc1-links-for-osx-cocoa-x64-appear-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[#PDT/Helios is getting much better, the move to dltk finally starting to pay off. Syntax highlighting is still broken though]]></description><link>http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</link><guid isPermaLink="true">http://aaronbonner.io/post/628265072/pdthelios-is-getting-much-better-the-move-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 14:09:05 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RC1 of Eclipse (Helios/3.6) is out. 
PDT package available as well see eclipse.org]]></description><link>http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</link><guid isPermaLink="true">http://aaronbonner.io/post/627731708/rc1-of-eclipse-helios36-is-out-pdt-package</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 24 May 2010 09:22:43 GMT</pubDate></item><item><title><![CDATA[HTTP Status Codes (auf Deutsch)]]></title><description><![CDATA[<p> </p>]]></description><link>http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</link><guid isPermaLink="true">http://aaronbonner.io/post/618832058/http-status-codes-auf-deutsch</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 10:05:30 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @AdrianSchneider: Looks like PHP will be adding scalar type hinting sometime soon... sweet! (via @padraicb)]]></description><link>http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</link><guid isPermaLink="true">http://aaronbonner.io/post/618341150/rt-adrianschneider-looks-like-php-will-be-adding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 21 May 2010 05:24:17 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Seaking of dodgy ecomm, that leads to magento, and therefore to exceptions. Did'ya know you can rethrow in a try/catch block with throw $ex]]></description><link>http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351697/seaking-of-dodgy-ecomm-that-leads-to-magento-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[basename and dirname bash builtins are handy and behave similar to their php based cousins. 
See here for application: http://bit.ly/cYDqHa]]></description><link>http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</link><guid isPermaLink="true">http://aaronbonner.io/post/616351691/basename-and-dirname-bash-builtins-are-handy-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Turns out, you can install greasemonkey scripts in Google Chrome without any extensions at all!]]></description><link>http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</link><guid isPermaLink="true">http://aaronbonner.io/post/616351689/turns-out-you-can-install-greasemonkey-scripts-in</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 14:45:09 GMT</pubDate></item><item><title><![CDATA[Mysql's DATE_SUB function]]></title><description><![CDATA[<p>Mysql has many great date handling functions, but one that I don't see used very often is <a title="DATE_SUB" href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-sub">DATE_SUB</a>, which is fantastic at handling time intervals.</p>&#13;
<p>For example, say you want to select all records from a table whose date is more than 30 days old (relative to the current date).</p>&#13;
<p><code>SELECT * FROM table WHERE DATE_SUB(CURDATE(),INTERVAL 30 DAY) &gt; date_column</code></p>&#13;
<p>This assumes your date column is of type date, or datetime, although using FROM_UNIXTIME will allow you to work with timestamps as well.</p>&#13;
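<p>If your column stores Unix timestamps instead, the comparison might look like this (a sketch; my_table and ts_column are stand-in names):</p>

```sql
-- select rows whose timestamp is more than 30 days in the past
SELECT *
FROM my_table
WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) > FROM_UNIXTIME(ts_column);
```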
<p>The date_sub call, if run on 2010-05-20, will return 2010-04-20. So if your date_column field holds a date earlier than this value, the record is more than 30 days old and will match.</p>&#13;
<p>This is particularly handy when generating reports.</p> ]]></description><link>http://aaronbonner.io/post/616158707/mysqls-datesub-function</link><guid isPermaLink="true">http://aaronbonner.io/post/616158707/mysqls-datesub-function</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 20 May 2010 13:09:00 GMT</pubDate></item><item><title><![CDATA[Hacking core config.xml files in Magento]]></title><description><![CDATA[<p>In a word, don't!</p>&#13;
<p>I just saw an insane suggestion on a Magento forum, advising users to change their admin url by hacking app/code/core/Mage/Adminhtml/etc/config.xml.</p>&#13;
<p>This is dangerous and pointless. Dangerous, because when you attempt to upgrade magento, you will at best lose your custom configuration, and at worst, cause an upgrade to fail. Pointless, because you can achieve the same effect by making changes to app/etc/local.xml.</p>&#13;
<p>This file overrides any configuration set elsewhere and, by default, already has a definition for a custom admin url. When you install Magento, the installer sets this file up by substituting the details you provide into local.xml.template.</p>&#13;
<p>So to change your admin url, after running the installer, simply fire up an editor and open app/etc/local.xml and look for this:</p>&#13;
<p><code>&lt;admin&gt;<br />    &lt;routers&gt;<br />        &lt;adminhtml&gt;<br />            &lt;args&gt;<br />                &lt;frontName&gt;&lt;![CDATA[admin]]&gt;&lt;/frontName&gt;<br />            &lt;/args&gt;<br />        &lt;/adminhtml&gt;<br />    &lt;/routers&gt;<br />&lt;/admin&gt; </code></p>&#13;
<p>Change the admin (inside the CDATA declaration) to foobar, or gobbles, or topsecret, whatever. Much easier, much safer than hacking away at core magento configuration files.</p> ]]></description><link>http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/607341865/hacking-core-configxml-files-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 17:02:00 GMT</pubDate></item><item><title><![CDATA[Creating a Custom Template Block in Magento]]></title><description><![CDATA[<p>I originally wrote this for a client and I thought it might help others out there trying to do the same thing. Adding template content to Magento views ought to be really simple, but sadly due to a lack of good documentation, it is anything but.</p>&#13;
<p>To add a text template to a view in magento, you need to consider  three aspects</p>&#13;
<ol><li>The template block itself</li>&#13;
<li>The block, or layout, the template will sit in</li>&#13;
<li>The layout .xml configuration</li>&#13;
</ol><p>The first element is simplest: create some html, stuff it into a .phtml file and copy it to a directory within your theme, which resides (relative to the store root dir) in 'app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates'. I'm going to assume the package/theme is default/default from here forward.</p>&#13;
<p>You will now need to configure one or more layout xml templates ('app/design/frontend/default/default/layout') to refer to your new template block. For most general purpose, globally available blocks, this will be 'page.xml'.</p>&#13;
<p>For example, to add a productfinder template to the three column  layout, you need to edit page.xml and within the</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"page/html"</span> name=<span class="code-quote">"root"</span> output=<span class="code-quote">"toHtml"</span> template=<span class="code-quote">"page/3columns.phtml"</span>&gt;</span><br />...<br /><span class="code-tag">&lt;/block&gt;</span></pre>&#13;
<p>block add the following code near the closing &lt;/block&gt;:</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;block type=<span class="code-quote">"core/template"</span> name=<span class="code-quote">"my_product_finder"</span> as=<span class="code-quote">"my_product_finder"</span> template=<span class="code-quote">"templatedir/productfinder.phtml"</span>/&gt;</span></pre>&#13;
<p>type="core/template" refers to a Magento class Mage_Core_Block_Template. This type can be different if you want a specific type of block, but core/template is the simplest way to include some text on a page. The 'name' and 'as' attributes allow you to reference the block in your enclosing templates, e.g. 3columns.phtml, or other layout .xml files. The template attribute is the relative path to the text template you want to include and is relative to the theme template root i.e. app/design/frontend/&lt;package&gt;/&lt;theme&gt;/templates.</p>&#13;
<p>Once you have created a layout definition for your block, including the template on a page is easy. In 3columns.phtml, for example, simply put &lt;?php echo $this-&gt;getChildHtml('my_product_finder') ?&gt; and the text will be included (note the echo: getChildHtml() returns the rendered block rather than printing it). Remember to refresh, or clear Magento's cache first though.</p>&#13;
<p>One more element to consider is exclusions. If you put the above code  in 3column.phtml, that text will appear on every page that uses that  layout template. You may not, for example, want to include it on  customer dashboard pages. In this case  you will need to define a remove  statement within the layout.xml for the group of pages you want to  change. In this example, we need to edit customer.xml</p>&#13;
<p>In customer.xml (and indeed page.xml) you will see a number of  elements similar to &lt;default&gt;&lt;/default&gt; (which applies to  ALL pages), &lt;customer_account&gt;&lt;/customer_account&gt; (which  refers to all pages with the url customer/account/), and  &lt;customer_account_edit&gt;&lt;/customer_account_edit&gt; (which  refers to all pages with the url customer/account/edit/). Including your  remove declaration in the right element here allows fine grained  control over which pages your blocks appear.</p>&#13;
<p>In the example here, we want to remove the product finder from ALL  customer account pages. Therefore within the &lt;customer_account&gt;  element we add the code</p>&#13;
<pre class="code-xml"><span class="code-tag">&lt;customer_account&gt;</span><br />...<br /><br /><span class="code-tag">&lt;reference name=<span class="code-quote">"root"</span>&gt;</span><br /><span class="code-tag">&lt;remove name=<span class="code-quote">"my_product_finder"</span>&gt;</span><span class="code-tag">&lt;/remove&gt;</span><br /><span class="code-tag">&lt;/reference&gt;</span><br /><br /><span class="code-tag">&lt;/customer_account&gt;</span><br /></pre>&#13;
<p>The reference name=root element ensures we are altering the root block, which is the one in which we defined our template block in page.xml. The enclosed remove call uses the name of the block we created, "my_product_finder", to identify the block we want to remove for these pages.</p>&#13;
<p>That's it! All pages that exist under &lt;customer_account&gt; (so  all of them) have this block removed.</p> ]]></description><link>http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/606455710/creating-a-custom-template-block-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 17 May 2010 09:01:00 GMT</pubDate></item><item><title><![CDATA[Issues with Magento and Memcache]]></title><description><![CDATA[<p>If you're running an up-to-date version of memcached (1.4.4) and a recent PECL memcache extension, you may encounter difficulties deleting cached keys. See here: http://www.php.net/manual/en/function.memcache-delete.php#94536. Be sure to read the comments as there is a lot of useful information regarding the issue.</p>&#13;
<p>To see if you are affected run memcached in -vvv mode and look out for this:</p>&#13;
<p><code>CLIENT_ERROR bad command line format.  Usage: delete &lt;key&gt; [noreply]</code></p>&#13;
<p>This causes Magento to throw an exception with the message "can't get stat from localhost:11211".</p>&#13;
<p>The problem is down to a change in behaviour of memcache in version 1.4 where they removed the second parameter passed to the delete function. See here for more info http://code.google.com/p/memcached/wiki/ReleaseNotes144. 1.4.4 thankfully added some backwards compatibility.</p>&#13;
<p>Magento 1.4 uses Zend Framework 1.9.6 and this, sadly, does not support these changes. Version 1.10.3+ does.</p>&#13;
<p>http://framework.zend.com/issues/browse/ZF-9376</p> ]]></description><link>http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</link><guid isPermaLink="true">http://aaronbonner.io/post/600486902/issues-with-magento-and-memcache</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 15 May 2010 09:52:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[I'll say this for macbook pros, they bounce from a metre height very well!]]></description><link>http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</link><guid isPermaLink="true">http://aaronbonner.io/post/589125036/ill-say-this-for-macbook-pros-they-bounce-from-a</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 11 May 2010 08:48:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: Loading #magento library into my Eclipse PHP M7 Helios package took 7 seconds, content assist is immediate, go M7 &gt;&gt; ...]]></description><link>http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</link><guid isPermaLink="true">http://aaronbonner.io/post/579216642/rt-royganor-loading-magento-library-into-my</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 07 May 2010 17:46:44 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If you use a case-sensitive HFS filesystem you won't be able to install #sc2 on mac osx. 
See here for the workaround: http://bit.ly/bKlEcA]]></description><link>http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</link><guid isPermaLink="true">http://aaronbonner.io/post/576500393/if-you-use-a-case-sensitive-hps-filesystem-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 06 May 2010 16:43:03 GMT</pubDate></item><item><title><![CDATA[Character encoding and Mac OSX]]></title><description><![CDATA[<p>More utf-8 fun, if you're using Mac OSX and the terminal, you'll sometimes run into trouble with character encoding.</p>&#13;
<p>I use iTerm as my terminal of choice, and it natively supports utf-8. However by default, your terminal environment does not. I have my environment configured to use the de_DE (German) locale. However some characters, most typically ü and ä do not display correctly.</p>&#13;
<p>In order to get correct text in your utf-8 terminal you need to configure your environment to use a utf-8 enabled locale. If you're using en_GB, or de_DE (and you can get a list of available locales by calling 'locale -a') you just need to edit /etc/profile or ~/.profile or ~/.bash_profile, and put the line export LC_ALL='de_DE.UTF-8' or LC_ALL='en_GB.UTF-8'</p>&#13;
<p>This will ensure your terminal AND environment are speaking the same language.</p> ]]></description><link>http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</link><guid isPermaLink="true">http://aaronbonner.io/post/570462273/character-encoding-and-mac-osx</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 09:01:28 GMT</pubDate></item><item><title><![CDATA[A word or two on character encoding]]></title><description><![CDATA[<p>Until PHP goes utf-8 native, we have to pay particular attention to the way we handle extended ASCII. For example I just fixed a simple issue for a client where they were pasting content from Word into their cms. In their browser, certain quote characters, copyright and trademark symbols appeared as question marks. </p>
<p>This happens when you try to render non utf-8 text as utf-8. Utf-8 shares the same 7-bit ASCII range as the common Windows encodings cp-1252 and iso-8859-1. Lower ASCII is the basic alphabet e.g. a-z, A-Z, hyphens, commas, etc. However extended characters, like accents, symbols and umlauts, are encoded differently. So a cp-1252 trademark symbol has a different code in utf-8. If you try to render a cp-1252 copyright symbol as utf-8, you will just see a question mark in the browser, because that byte is not a valid utf-8 sequence. </p>
<p>To fix this you need to normalise your content. If it&#39;s going into mysql, ensure your tables are set to your normalised format. I recommend using utf-8 as this normal form. </p>
<p>To normalise your text, use iconv, which is available as a php extension. </p>
<p><code>$utf8Text = iconv(&#39;iso8859-1&#39;, &#39;utf-8&#39;, $isoText);</code></p>
<p>When outputting to the browser, run this through htmlentities or htmlspecialchars. Both these functions expect iso-8859-1 input by default. To correctly prepare your utf-8 text for output you need to supply a third parameter to these functions giving your text&#39;s encoding, in this case utf-8. </p>
<p><code>htmlentities($text, null, &#39;utf-8&#39;);</code></p>
<p>Failing to supply this parameter means your text will be broken and your extended characters displayed as question marks. </p>
<p>PHP provides all the tools you need to properly handle many different text encodings. As a developer you just need to normalise your input, and properly encode your output. </p>
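<p>Putting both steps together, a minimal self-contained sketch (the sample bytes are cp-1252 "smart quotes" and a copyright symbol, as pasted from Word):</p>

```php
<?php
// \x93 and \x94 are cp-1252 curly quotes; \xA9 is the copyright symbol
$cp1252 = "\x93Hello\x94 \xA9 2010";

// Normalise to utf-8 on the way in...
$utf8 = iconv('CP1252', 'UTF-8', $cp1252);

// ...and declare the encoding explicitly on the way out
echo htmlentities($utf8, ENT_QUOTES, 'UTF-8');
```

Without the iconv step, the same bytes fed straight to a utf-8 page would come out as question marks.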
]]></description><link>http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</link><guid isPermaLink="true">http://aaronbonner.io/post/570402643/a-word-or-two-on-character-encoding</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 04 May 2010 08:17:06 GMT</pubDate></item><item><title><![CDATA[Making in-place substitutions with Sed]]></title><description><![CDATA[<p>Sed has a handy (but dangerous!) in-place substitution ability, but it can be tricky to use.</p>&#13;
<p>From the man and help pages it would seem doing sed -i 's/hello/goodbye/g' helloworld.txt would be the way to achieve it.</p>&#13;
<p>It doesn't, and you'll either get a script processing error or something like: sed: -i: No such file or directory.</p>&#13;
<p>The trick (with the BSD sed that ships with Mac OSX) is to use it like this: sed -i '' 's/hello/goodbye/g' helloworld.txt</p>&#13;
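<p>A quick way to try it safely: GNU sed accepts -i with no argument, BSD sed does not, but giving -i an explicit backup suffix works with both (sketch using a throwaway file under /tmp):</p>

```shell
# create a throwaway file to edit
printf 'hello world\n' > /tmp/sed_demo.txt

# an explicit backup suffix after -i works on both GNU and BSD sed
sed -i.bak 's/hello/goodbye/g' /tmp/sed_demo.txt

cat /tmp/sed_demo.txt       # goodbye world
cat /tmp/sed_demo.txt.bak   # hello world (the untouched backup)
```

The backup file also gives you an undo if the substitution goes wrong.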
&#13;
 ]]></description><link>http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</link><guid isPermaLink="true">http://aaronbonner.io/post/560827386/making-in-place-substitutions-with-sed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:51:56 GMT</pubDate></item><item><title><![CDATA[Sed Oneliners]]></title><description><![CDATA[<p>Great set of quick sed oneliners for quick file 'fixing'. See http://sed.sourceforge.net/sed1line.txt</p> ]]></description><link>http://aaronbonner.io/post/560801546/sed-oneliners</link><guid isPermaLink="true">http://aaronbonner.io/post/560801546/sed-oneliners</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 12:34:08 GMT</pubDate></item><item><title><![CDATA[Magento and URLs]]></title><description><![CDATA[<p>I see, all too often, Magento custom templates with hardcoded urls. The main problem with this, is it makes it hard to create a testing or staging environment. Magento's documentation is largely non-existent though, so I can understand why some template developers do it.</p>&#13;
<p>Still, it's very easy to get the baseurl directly from magento, below I've summarised the most common ones you'll want.</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);</strong><br /> http://myshop.mydomain.com/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_LINK);</strong><br /> http://myshop.mydomain.com/index.php/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_JS);</strong> http://myshop.mydomain.com/js/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_MEDIA);<br /></strong>http://myshop.mydomain.com/media/</p>&#13;
<p><strong>Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_SKIN);</strong><br />http://myshop.mydomain.com/skin/</p>&#13;
<p>So, say you want to create a link to your store's checkout page you would use the following code:</p>&#13;
<pre>&lt;a href="&lt;?php Mage::getBaseUrl(Mage_Core_Model_Store::URL_TYPE_WEB);?&gt;checkout/onepage/"&gt;Checkout&lt;/a&gt;</pre>&#13;
<p>It's well worth taking a look at the getBaseUrl function in the Mage class(app/Mage.php) and Mage_Core_Model_Store (app/code/core/Mage/Core/Model/Store.php)</p> ]]></description><link>http://aaronbonner.io/post/560552271/magento-and-urls</link><guid isPermaLink="true">http://aaronbonner.io/post/560552271/magento-and-urls</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 30 Apr 2010 09:00:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[echo $# gives you the number of arguments passed into your #bash script. With if [ $# -lt n ] you can ensure your script is called properly]]></description><link>http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</link><guid isPermaLink="true">http://aaronbonner.io/post/550425577/echo-gives-you-the-number-of-arguments-passed</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 26 Apr 2010 09:51:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[In osx, occasionally a file will get locked and even a sudo rm -rf wont delete it. 
Use 'sudo SetFile -a l &lt;file&gt;' to unlock it.]]></description><link>http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</link><guid isPermaLink="true">http://aaronbonner.io/post/543759983/in-osx-occasionally-a-file-will-get-locked-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[With #macports #mysql remember to call mysql_install_db5 with sudo -u _mysql]]></description><link>http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</link><guid isPermaLink="true">http://aaronbonner.io/post/543759979/with-macports-mysql-remember-to-call</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 23 Apr 2010 20:03:11 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[External ip address from the commandline made simple: 'curl ip.appspot.com' Done.]]></description><link>http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</link><guid isPermaLink="true">http://aaronbonner.io/post/520608972/external-ip-address-from-the-commandline-made</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 14 Apr 2010 11:25:52 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[The 2.2 interim builds of eclipse #pdt for #php editing are a big improvement over 2.1 in speed. 
Syntax highlighting is a bit off though?]]></description><link>http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</link><guid isPermaLink="true">http://aaronbonner.io/post/515435643/the-22-interim-builds-of-eclipse-pdt-for-php</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 10:42:58 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for installing recent versions of PDT: http://bit.ly/9BCtBa]]></description><link>http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/515289481/tips-for-installing-recent-versions-of-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Mon, 12 Apr 2010 08:49:34 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Tips for (unit)testing mail functionality w/ #zf @AdrianSchneider's here: http://bit.ly/czITqv and @akrabat's here: http://bit.ly/9jIRMH]]></description><link>http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</link><guid isPermaLink="true">http://aaronbonner.io/post/507967616/tips-for-unittesting-mail-functionality-w-zf</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 09 Apr 2010 10:35:06 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @weierophinney: Some #php 5.3 discoveries and gotchas: http://short.ie/php53primer /corrected link!]]></description><link>http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</link><guid isPermaLink="true">http://aaronbonner.io/post/500907569/rt-weierophinney-some-php-53-discoveries-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[If getting started with Apache solr, remember to read up on the difference between the 'DisMax' 
and 'Standard' search request handlers]]></description><link>http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</link><guid isPermaLink="true">http://aaronbonner.io/post/500907564/if-getting-started-with-apache-solr-remember-to</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 06 Apr 2010 15:51:07 GMT</pubDate></item><item><title><![CDATA[An effective 'foreach' loop in bash]]></title><description><![CDATA[<p>I have been hacking away in bash a little bit lately and it's remarkable how such a simple set of utilities can allow you to perform quite complex tasks.</p>&#13;
<p>I had an array of files that I wanted to do some operations on and the following construct allowed me to easily iterate through that list.</p>&#13;
<pre>FILES=( a/path/to/a/file1 a/path/to/another/file2 and/so/on/and/so/on )&#13;
&#13;
for ELEMENT in $(seq 0 $((${#FILES[@]} - 1))); do&#13;
  echo ${FILES[$ELEMENT]}&#13;
done </pre>&#13;
<p>Here ${#FILES[@]} expands to the number of elements in FILES, and the seq command produces a sequence of numbers from x to y, so $(seq 0 $((${#FILES[@]} - 1))) yields one index per array element. If you call seq 0 4, you will get the numbers 0 through 4, each on its own line.</p>&#13;
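<p>Worth noting: when you don't need the index itself, there is a more direct form that sidesteps the arithmetic entirely by expanding the array elements. A minimal sketch (same hypothetical FILES array as above):</p>&#13;

```shell
# "${FILES[@]}" expands each array element as its own word,
# so entries containing spaces survive intact.
FILES=( a/path/to/a/file1 a/path/to/another/file2 )

for FILE in "${FILES[@]}"; do
  echo "$FILE"   # prints each path on its own line
done
```

<p>The indexed form above remains useful when you need the position as well as the value.</p>&#13;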
<p>So while the syntax is a little smelly, its terse power is quite handy.</p> ]]></description><link>http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/491102496/an-effective-foreach-loop-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 02 Apr 2010 12:21:04 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Handy way to read a file line by line in bash: exec 3&lt; myfile; while read &lt;&amp;3; do echo $REPLY; done; exec &lt;&amp;3-;]]></description><link>http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</link><guid isPermaLink="true">http://aaronbonner.io/post/489058063/handy-way-to-read-a-file-in-line-by-line-in-bash</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 14:05:51 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @royganor: "Installing Eclipse PDT" http://bit.ly/cZmnC4 &gt;&gt; nice series of posts by @senf]]></description><link>http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</link><guid isPermaLink="true">http://aaronbonner.io/post/488677908/rt-royganor-installing-eclipse-pdt</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 01 Apr 2010 08:50:53 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Writing unit tests is like running 10 miles. 
You don't always like it much at the time, but you're almost always better off for it.]]></description><link>http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</link><guid isPermaLink="true">http://aaronbonner.io/post/486410879/writing-unit-tests-is-like-running-10-miles-you</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Rather than mocking your #ZF Db class, it's worth looking at Zend_Test_DbAdapter and its bff Zend_Test_DbStatement. Docs are scarce though.]]></description><link>http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</link><guid isPermaLink="true">http://aaronbonner.io/post/486410877/rather-than-mocking-your-zf-db-class-its-worth</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 31 Mar 2010 10:42:21 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FUSE is sweet: sshfs name@server:/path/to/folder /path/to/mount/point]]></description><link>http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</link><guid isPermaLink="true">http://aaronbonner.io/post/484117792/fuse-is-sweet-sshfs-nameserverpathtofolder</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 12:51:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Setting your Mac OSX system hostname from the commandline: sudo scutil --set HostName myhostname.local]]></description><link>http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</link><guid isPermaLink="true">http://aaronbonner.io/post/483861215/setting-your-mac-osx-system-hostname-from-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 30 Mar 2010 09:20:20 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Awesome website for command 
line lovers: http://bit.ly/ayThir]]></description><link>http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</link><guid isPermaLink="true">http://aaronbonner.io/post/474559847/awesome-website-for-command-line-lovers</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 26 Mar 2010 10:57:15 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @chrisgrice: oh wow - (from http://bit.ly/al8kM5) save a file you edited in vim without the needed permissions: :w !sudo tee %]]></description><link>http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</link><guid isPermaLink="true">http://aaronbonner.io/post/470677546/rt-chrisgrice-oh-wow-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 18:57:35 GMT</pubDate></item><item><title><![CDATA[Command Line Encryption in Magento]]></title><description><![CDATA[<p>Part of any good developer's toolkit, is a set of tools to aid the creation of a 'staging' or 'test' site that mirrors the production environment as closely as possible.</p>&#13;
<p>Often you will want to change some details: base URLs, test payment credentials, or shipping account details.</p>&#13;
<p>Generally this is pretty easy to script up in bash, python or even plain PHP. However, Magento encrypts some of the data it stores in core_config_data.</p>&#13;
<p>The following is the approach I use to update data in core_config_data in an encrypted format Magento will accept:</p>&#13;
<pre>&lt;?php&#13;
// Bootstrap Magento; run this from the installation root&#13;
require_once 'app/Mage.php';&#13;
umask(0);&#13;
$app = Mage::app('default');&#13;
// Value to encrypt, passed as the first command-line argument&#13;
$data = $_SERVER['argv'][1];&#13;
$obj = Mage::getModel('core/encryption');&#13;
$helper = Mage::helper('core');&#13;
$obj-&gt;setHelper($helper);&#13;
echo $obj-&gt;encrypt($data);&#13;
</pre> ]]></description><link>http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</link><guid isPermaLink="true">http://aaronbonner.io/post/470143334/command-line-encryption-in-magento</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 24 Mar 2010 12:50:00 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Anyone with Snow Leopard and an MS external keyboard/mouse would be well advised to install intelli(type|point) 7.1 - kernel panics gone!]]></description><link>http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</link><guid isPermaLink="true">http://aaronbonner.io/post/468481549/anyone-with-snow-leopard-and-an-ms-external</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Tue, 23 Mar 2010 19:46:19 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[RT @mraible: My TSSJS 2010 Presentations and Summary: http://bit.ly/9mt1Iv #tssjs]]></description><link>http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</link><guid isPermaLink="true">http://aaronbonner.io/post/460740142/rt-mraible-my-tssjs-2010-presentations-and</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Mysql 5.0 and Zend_Date::getIso() do not always play nicely together. 
For date comparisons it's best to use getString() rather than getIso()]]></description><link>http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</link><guid isPermaLink="true">http://aaronbonner.io/post/460740141/mysql-50-and-zenddategetiso-do-not-always</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Sat, 20 Mar 2010 10:29:48 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Just remember to set a default locale with the zend_framework, otherwise zend_date will start doing strange things for americans...]]></description><link>http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</link><guid isPermaLink="true">http://aaronbonner.io/post/459128887/just-remember-to-set-a-default-locale-with-the</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Fri, 19 Mar 2010 17:09:09 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Want to resize a directory of pictures? 
Try this one liner: for file in *; do convert -scale 1024x768 $file resized_$file; done]]></description><link>http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</link><guid isPermaLink="true">http://aaronbonner.io/post/454488388/want-to-resize-a-directory-of-pictures-try-this</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Wed, 17 Mar 2010 14:23:41 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[Here's a hot tip, don't use single quotes in Zend Framework application.ini files, it will cause hair loss...]]></description><link>http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</link><guid isPermaLink="true">http://aaronbonner.io/post/441330701/heres-a-hot-tip-dont-use-single-quotes-in-zend</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 15:32:39 GMT</pubDate></item><item><title><![CDATA[No title]]></title><description><![CDATA[FInding the last day of the previous month, from the shell: cal `date --date 'last month' '+%m'` `date '+%Y'` | grep . | fmt -1 | tail -1]]></description><link>http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</link><guid isPermaLink="true">http://aaronbonner.io/post/440106254/finding-the-last-day-of-the-previous-month-from</guid><dc:creator><![CDATA[Aaron Bonner]]></dc:creator><pubDate>Thu, 11 Mar 2010 01:35:27 GMT</pubDate></item></channel></rss>